Test Report: KVM_Linux_crio 19872

d8c730041b5457cdbe5017f8cce276eb986ed9a4:2024-10-28:36847

Test fail (32/314)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 155.58
38 TestAddons/parallel/MetricsServer 366.3
47 TestAddons/StoppedEnableDisable 154.3
166 TestMultiControlPlane/serial/StopSecondaryNode 141.35
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.51
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.41
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.29
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 268.96
173 TestMultiControlPlane/serial/StopCluster 141.87
233 TestMultiNode/serial/RestartKeepsNodes 329.81
235 TestMultiNode/serial/StopMultiNode 145.19
242 TestPreload 239.62
250 TestKubernetesUpgrade 427.39
266 TestPause/serial/SecondStartNoReconfiguration 59.52
287 TestStartStop/group/old-k8s-version/serial/FirstStart 273.48
294 TestStartStop/group/embed-certs/serial/Stop 139.43
297 TestStartStop/group/no-preload/serial/Stop 139.25
300 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 111.8
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.96
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/SecondStart 734.57
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.94
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.89
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.84
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.18
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 387
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 484.11
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 341.56
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 136.2
TestAddons/parallel/Ingress (155.58s)
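
The log below shows the failing check: curl is run inside the VM through minikube ssh and never gets a response, and the reported status 28 is the remote curl's exit code, which for curl indicates the operation timed out. As a rough sketch of re-running the same check by hand (the profile name addons-186035 and binary path are taken from this run; the explicit 30-second curl timeout is an illustrative addition, not part of the test):

	# check the ingress-nginx controller pods in the same cluster context
	kubectl --context addons-186035 get pods -n ingress-nginx
	# repeat the failing request from inside the VM, with an explicit timeout
	out/minikube-linux-amd64 -p addons-186035 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"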

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-186035 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-186035 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-186035 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b40f41cd-78f5-4945-99b4-5630913ebfca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b40f41cd-78f5-4945-99b4-5630913ebfca] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004335002s
I1028 17:11:24.235756   20680 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-186035 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.226426943s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-186035 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.15
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-186035 -n addons-186035
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 logs -n 25: (1.216770867s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| delete  | -p download-only-852823                                                                     | download-only-852823 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| delete  | -p download-only-565697                                                                     | download-only-565697 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| delete  | -p download-only-852823                                                                     | download-only-852823 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-523787 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | binary-mirror-523787                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-523787                                                                     | binary-mirror-523787 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| addons  | enable dashboard -p                                                                         | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-186035                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-186035                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-186035 --wait=true                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | -p addons-186035                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-186035 ip                                                                            | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-186035 ssh curl -s                                                                   | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-186035 ssh cat                                                                       | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | /opt/local-path-provisioner/pvc-055034d5-d0f2-4684-852f-71b9bf776565_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:12 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:12 UTC | 28 Oct 24 17:12 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:12 UTC | 28 Oct 24 17:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:12 UTC | 28 Oct 24 17:12 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-186035 ip                                                                            | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:13 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:07:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:07:06.023262   21482 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:07:06.023369   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:06.023377   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:07:06.023381   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:06.023542   21482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:07:06.024065   21482 out.go:352] Setting JSON to false
	I1028 17:07:06.024887   21482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2969,"bootTime":1730132257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:07:06.024998   21482 start.go:139] virtualization: kvm guest
	I1028 17:07:06.026865   21482 out.go:177] * [addons-186035] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:07:06.028386   21482 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:07:06.028406   21482 notify.go:220] Checking for updates...
	I1028 17:07:06.030791   21482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:07:06.032241   21482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:07:06.033385   21482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:07:06.034487   21482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:07:06.035640   21482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:07:06.037075   21482 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:07:06.068523   21482 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 17:07:06.069666   21482 start.go:297] selected driver: kvm2
	I1028 17:07:06.069678   21482 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:07:06.069688   21482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:07:06.070336   21482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:07:06.070395   21482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:07:06.084040   21482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:07:06.084078   21482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:07:06.084336   21482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:07:06.084364   21482 cni.go:84] Creating CNI manager for ""
	I1028 17:07:06.084408   21482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:07:06.084418   21482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 17:07:06.084457   21482 start.go:340] cluster config:
	{Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:06.084596   21482 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:07:06.086134   21482 out.go:177] * Starting "addons-186035" primary control-plane node in "addons-186035" cluster
	I1028 17:07:06.087292   21482 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:06.087316   21482 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:07:06.087328   21482 cache.go:56] Caching tarball of preloaded images
	I1028 17:07:06.087390   21482 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:07:06.087402   21482 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:07:06.087681   21482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/config.json ...
	I1028 17:07:06.087709   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/config.json: {Name:mk56e20b9d6db6d349c73c0ce52b4e46b329f082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:06.087845   21482 start.go:360] acquireMachinesLock for addons-186035: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:07:06.087901   21482 start.go:364] duration metric: took 40.37µs to acquireMachinesLock for "addons-186035"
	I1028 17:07:06.087921   21482 start.go:93] Provisioning new machine with config: &{Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:07:06.087978   21482 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 17:07:06.089587   21482 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 17:07:06.089694   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:06.089742   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:06.102800   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I1028 17:07:06.103134   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:06.103648   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:06.103668   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:06.104012   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:06.104202   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:06.104335   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:06.104495   21482 start.go:159] libmachine.API.Create for "addons-186035" (driver="kvm2")
	I1028 17:07:06.104533   21482 client.go:168] LocalClient.Create starting
	I1028 17:07:06.104567   21482 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:07:06.209214   21482 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:07:06.360262   21482 main.go:141] libmachine: Running pre-create checks...
	I1028 17:07:06.360284   21482 main.go:141] libmachine: (addons-186035) Calling .PreCreateCheck
	I1028 17:07:06.360779   21482 main.go:141] libmachine: (addons-186035) Calling .GetConfigRaw
	I1028 17:07:06.361169   21482 main.go:141] libmachine: Creating machine...
	I1028 17:07:06.361183   21482 main.go:141] libmachine: (addons-186035) Calling .Create
	I1028 17:07:06.361330   21482 main.go:141] libmachine: (addons-186035) Creating KVM machine...
	I1028 17:07:06.362564   21482 main.go:141] libmachine: (addons-186035) DBG | found existing default KVM network
	I1028 17:07:06.363270   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.363133   21504 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a40}
	I1028 17:07:06.363290   21482 main.go:141] libmachine: (addons-186035) DBG | created network xml: 
	I1028 17:07:06.363309   21482 main.go:141] libmachine: (addons-186035) DBG | <network>
	I1028 17:07:06.363320   21482 main.go:141] libmachine: (addons-186035) DBG |   <name>mk-addons-186035</name>
	I1028 17:07:06.363330   21482 main.go:141] libmachine: (addons-186035) DBG |   <dns enable='no'/>
	I1028 17:07:06.363339   21482 main.go:141] libmachine: (addons-186035) DBG |   
	I1028 17:07:06.363354   21482 main.go:141] libmachine: (addons-186035) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 17:07:06.363369   21482 main.go:141] libmachine: (addons-186035) DBG |     <dhcp>
	I1028 17:07:06.363383   21482 main.go:141] libmachine: (addons-186035) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 17:07:06.363393   21482 main.go:141] libmachine: (addons-186035) DBG |     </dhcp>
	I1028 17:07:06.363403   21482 main.go:141] libmachine: (addons-186035) DBG |   </ip>
	I1028 17:07:06.363410   21482 main.go:141] libmachine: (addons-186035) DBG |   
	I1028 17:07:06.363422   21482 main.go:141] libmachine: (addons-186035) DBG | </network>
	I1028 17:07:06.363432   21482 main.go:141] libmachine: (addons-186035) DBG | 
	I1028 17:07:06.368447   21482 main.go:141] libmachine: (addons-186035) DBG | trying to create private KVM network mk-addons-186035 192.168.39.0/24...
	I1028 17:07:06.429592   21482 main.go:141] libmachine: (addons-186035) DBG | private KVM network mk-addons-186035 192.168.39.0/24 created
	I1028 17:07:06.429626   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.429547   21504 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:07:06.429645   21482 main.go:141] libmachine: (addons-186035) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035 ...
	I1028 17:07:06.429675   21482 main.go:141] libmachine: (addons-186035) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:07:06.429694   21482 main.go:141] libmachine: (addons-186035) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:07:06.703268   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.703138   21504 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa...
	I1028 17:07:06.820321   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.820217   21504 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/addons-186035.rawdisk...
	I1028 17:07:06.820368   21482 main.go:141] libmachine: (addons-186035) DBG | Writing magic tar header
	I1028 17:07:06.820382   21482 main.go:141] libmachine: (addons-186035) DBG | Writing SSH key tar header
	I1028 17:07:06.820398   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.820348   21504 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035 ...
	I1028 17:07:06.820496   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035
	I1028 17:07:06.820520   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035 (perms=drwx------)
	I1028 17:07:06.820533   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:07:06.820546   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:07:06.820555   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:07:06.820566   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:07:06.820575   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:07:06.820591   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:07:06.820605   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:07:06.820617   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home
	I1028 17:07:06.820629   21482 main.go:141] libmachine: (addons-186035) DBG | Skipping /home - not owner
	I1028 17:07:06.820645   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:07:06.820662   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:07:06.820677   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:07:06.820687   21482 main.go:141] libmachine: (addons-186035) Creating domain...
	I1028 17:07:06.821616   21482 main.go:141] libmachine: (addons-186035) define libvirt domain using xml: 
	I1028 17:07:06.821652   21482 main.go:141] libmachine: (addons-186035) <domain type='kvm'>
	I1028 17:07:06.821662   21482 main.go:141] libmachine: (addons-186035)   <name>addons-186035</name>
	I1028 17:07:06.821675   21482 main.go:141] libmachine: (addons-186035)   <memory unit='MiB'>4000</memory>
	I1028 17:07:06.821699   21482 main.go:141] libmachine: (addons-186035)   <vcpu>2</vcpu>
	I1028 17:07:06.821713   21482 main.go:141] libmachine: (addons-186035)   <features>
	I1028 17:07:06.821742   21482 main.go:141] libmachine: (addons-186035)     <acpi/>
	I1028 17:07:06.821765   21482 main.go:141] libmachine: (addons-186035)     <apic/>
	I1028 17:07:06.821777   21482 main.go:141] libmachine: (addons-186035)     <pae/>
	I1028 17:07:06.821793   21482 main.go:141] libmachine: (addons-186035)     
	I1028 17:07:06.821806   21482 main.go:141] libmachine: (addons-186035)   </features>
	I1028 17:07:06.821818   21482 main.go:141] libmachine: (addons-186035)   <cpu mode='host-passthrough'>
	I1028 17:07:06.821829   21482 main.go:141] libmachine: (addons-186035)   
	I1028 17:07:06.821850   21482 main.go:141] libmachine: (addons-186035)   </cpu>
	I1028 17:07:06.821861   21482 main.go:141] libmachine: (addons-186035)   <os>
	I1028 17:07:06.821873   21482 main.go:141] libmachine: (addons-186035)     <type>hvm</type>
	I1028 17:07:06.821885   21482 main.go:141] libmachine: (addons-186035)     <boot dev='cdrom'/>
	I1028 17:07:06.821895   21482 main.go:141] libmachine: (addons-186035)     <boot dev='hd'/>
	I1028 17:07:06.821906   21482 main.go:141] libmachine: (addons-186035)     <bootmenu enable='no'/>
	I1028 17:07:06.821914   21482 main.go:141] libmachine: (addons-186035)   </os>
	I1028 17:07:06.821925   21482 main.go:141] libmachine: (addons-186035)   <devices>
	I1028 17:07:06.821936   21482 main.go:141] libmachine: (addons-186035)     <disk type='file' device='cdrom'>
	I1028 17:07:06.821956   21482 main.go:141] libmachine: (addons-186035)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/boot2docker.iso'/>
	I1028 17:07:06.821971   21482 main.go:141] libmachine: (addons-186035)       <target dev='hdc' bus='scsi'/>
	I1028 17:07:06.821983   21482 main.go:141] libmachine: (addons-186035)       <readonly/>
	I1028 17:07:06.821991   21482 main.go:141] libmachine: (addons-186035)     </disk>
	I1028 17:07:06.822001   21482 main.go:141] libmachine: (addons-186035)     <disk type='file' device='disk'>
	I1028 17:07:06.822014   21482 main.go:141] libmachine: (addons-186035)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:07:06.822033   21482 main.go:141] libmachine: (addons-186035)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/addons-186035.rawdisk'/>
	I1028 17:07:06.822048   21482 main.go:141] libmachine: (addons-186035)       <target dev='hda' bus='virtio'/>
	I1028 17:07:06.822065   21482 main.go:141] libmachine: (addons-186035)     </disk>
	I1028 17:07:06.822082   21482 main.go:141] libmachine: (addons-186035)     <interface type='network'>
	I1028 17:07:06.822095   21482 main.go:141] libmachine: (addons-186035)       <source network='mk-addons-186035'/>
	I1028 17:07:06.822108   21482 main.go:141] libmachine: (addons-186035)       <model type='virtio'/>
	I1028 17:07:06.822119   21482 main.go:141] libmachine: (addons-186035)     </interface>
	I1028 17:07:06.822127   21482 main.go:141] libmachine: (addons-186035)     <interface type='network'>
	I1028 17:07:06.822139   21482 main.go:141] libmachine: (addons-186035)       <source network='default'/>
	I1028 17:07:06.822159   21482 main.go:141] libmachine: (addons-186035)       <model type='virtio'/>
	I1028 17:07:06.822171   21482 main.go:141] libmachine: (addons-186035)     </interface>
	I1028 17:07:06.822183   21482 main.go:141] libmachine: (addons-186035)     <serial type='pty'>
	I1028 17:07:06.822194   21482 main.go:141] libmachine: (addons-186035)       <target port='0'/>
	I1028 17:07:06.822203   21482 main.go:141] libmachine: (addons-186035)     </serial>
	I1028 17:07:06.822219   21482 main.go:141] libmachine: (addons-186035)     <console type='pty'>
	I1028 17:07:06.822231   21482 main.go:141] libmachine: (addons-186035)       <target type='serial' port='0'/>
	I1028 17:07:06.822243   21482 main.go:141] libmachine: (addons-186035)     </console>
	I1028 17:07:06.822257   21482 main.go:141] libmachine: (addons-186035)     <rng model='virtio'>
	I1028 17:07:06.822271   21482 main.go:141] libmachine: (addons-186035)       <backend model='random'>/dev/random</backend>
	I1028 17:07:06.822280   21482 main.go:141] libmachine: (addons-186035)     </rng>
	I1028 17:07:06.822290   21482 main.go:141] libmachine: (addons-186035)     
	I1028 17:07:06.822298   21482 main.go:141] libmachine: (addons-186035)     
	I1028 17:07:06.822309   21482 main.go:141] libmachine: (addons-186035)   </devices>
	I1028 17:07:06.822317   21482 main.go:141] libmachine: (addons-186035) </domain>
	I1028 17:07:06.822334   21482 main.go:141] libmachine: (addons-186035) 
	I1028 17:07:06.827859   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:4f:55:51 in network default
	I1028 17:07:06.828371   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:06.828389   21482 main.go:141] libmachine: (addons-186035) Ensuring networks are active...
	I1028 17:07:06.828934   21482 main.go:141] libmachine: (addons-186035) Ensuring network default is active
	I1028 17:07:06.829243   21482 main.go:141] libmachine: (addons-186035) Ensuring network mk-addons-186035 is active
	I1028 17:07:06.829685   21482 main.go:141] libmachine: (addons-186035) Getting domain xml...
	I1028 17:07:06.830337   21482 main.go:141] libmachine: (addons-186035) Creating domain...
	I1028 17:07:08.201932   21482 main.go:141] libmachine: (addons-186035) Waiting to get IP...
	I1028 17:07:08.202806   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:08.203095   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:08.203142   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:08.203086   21504 retry.go:31] will retry after 211.26097ms: waiting for machine to come up
	I1028 17:07:08.415296   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:08.415717   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:08.415746   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:08.415668   21504 retry.go:31] will retry after 338.97837ms: waiting for machine to come up
	I1028 17:07:08.756084   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:08.756484   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:08.756515   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:08.756416   21504 retry.go:31] will retry after 431.773016ms: waiting for machine to come up
	I1028 17:07:09.189885   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:09.190293   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:09.190318   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:09.190254   21504 retry.go:31] will retry after 507.772359ms: waiting for machine to come up
	I1028 17:07:09.699830   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:09.700184   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:09.700209   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:09.700134   21504 retry.go:31] will retry after 758.007253ms: waiting for machine to come up
	I1028 17:07:10.459957   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:10.460389   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:10.460414   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:10.460340   21504 retry.go:31] will retry after 903.570429ms: waiting for machine to come up
	I1028 17:07:11.364881   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:11.365302   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:11.365361   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:11.365296   21504 retry.go:31] will retry after 1.054833216s: waiting for machine to come up
	I1028 17:07:12.421406   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:12.421827   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:12.421850   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:12.421780   21504 retry.go:31] will retry after 1.246115446s: waiting for machine to come up
	I1028 17:07:13.670059   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:13.670436   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:13.670472   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:13.670400   21504 retry.go:31] will retry after 1.569122093s: waiting for machine to come up
	I1028 17:07:15.241605   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:15.241983   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:15.242015   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:15.241925   21504 retry.go:31] will retry after 1.64438524s: waiting for machine to come up
	I1028 17:07:16.888910   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:16.889350   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:16.889379   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:16.889308   21504 retry.go:31] will retry after 2.156287404s: waiting for machine to come up
	I1028 17:07:19.046824   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:19.047200   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:19.047225   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:19.047151   21504 retry.go:31] will retry after 3.084774607s: waiting for machine to come up
	I1028 17:07:22.133426   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:22.133774   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:22.133806   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:22.133714   21504 retry.go:31] will retry after 4.405522494s: waiting for machine to come up
	I1028 17:07:26.540979   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:26.541414   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:26.541437   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:26.541388   21504 retry.go:31] will retry after 4.107542395s: waiting for machine to come up
	I1028 17:07:30.653515   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.653928   21482 main.go:141] libmachine: (addons-186035) Found IP for machine: 192.168.39.15
	I1028 17:07:30.653955   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has current primary IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.653962   21482 main.go:141] libmachine: (addons-186035) Reserving static IP address...
	I1028 17:07:30.654400   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find host DHCP lease matching {name: "addons-186035", mac: "52:54:00:fd:e8:0a", ip: "192.168.39.15"} in network mk-addons-186035
	I1028 17:07:30.721605   21482 main.go:141] libmachine: (addons-186035) DBG | Getting to WaitForSSH function...
	I1028 17:07:30.721636   21482 main.go:141] libmachine: (addons-186035) Reserved static IP address: 192.168.39.15
	I1028 17:07:30.721668   21482 main.go:141] libmachine: (addons-186035) Waiting for SSH to be available...
	I1028 17:07:30.723800   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.724146   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:30.724170   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.724369   21482 main.go:141] libmachine: (addons-186035) DBG | Using SSH client type: external
	I1028 17:07:30.724407   21482 main.go:141] libmachine: (addons-186035) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa (-rw-------)
	I1028 17:07:30.724437   21482 main.go:141] libmachine: (addons-186035) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:07:30.724451   21482 main.go:141] libmachine: (addons-186035) DBG | About to run SSH command:
	I1028 17:07:30.724461   21482 main.go:141] libmachine: (addons-186035) DBG | exit 0
	I1028 17:07:30.848262   21482 main.go:141] libmachine: (addons-186035) DBG | SSH cmd err, output: <nil>: 
	I1028 17:07:30.848490   21482 main.go:141] libmachine: (addons-186035) KVM machine creation complete!
	I1028 17:07:30.848760   21482 main.go:141] libmachine: (addons-186035) Calling .GetConfigRaw
	I1028 17:07:30.849275   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:30.849435   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:30.849576   21482 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:07:30.849591   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:30.850766   21482 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:07:30.850777   21482 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:07:30.850782   21482 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:07:30.850787   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:30.852722   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.853081   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:30.853110   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.853219   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:30.853390   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.853513   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.853649   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:30.853783   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:30.854020   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:30.854039   21482 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:07:30.951656   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:07:30.951680   21482 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:07:30.951690   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:30.954210   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.954524   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:30.954547   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.954704   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:30.954900   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.955051   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.955178   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:30.955320   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:30.955523   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:30.955537   21482 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:07:31.052931   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:07:31.053013   21482 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:07:31.053025   21482 main.go:141] libmachine: Provisioning with buildroot...
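
	For reference, the provisioner detection above boils down to parsing the /etc/os-release text returned over SSH and matching its ID field. A minimal Go sketch of that idea follows; pickProvisioner is a hypothetical helper, not libmachine's actual API.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// pickProvisioner scans os-release output and returns the distro ID,
	// which is what decides that the "buildroot" provisioner applies here.
	func pickProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return "unknown"
	}

	func main() {
		// Output of `cat /etc/os-release` as captured in the log above.
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		fmt.Println("found compatible host:", pickProvisioner(out))
	}
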
	I1028 17:07:31.053034   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:31.053278   21482 buildroot.go:166] provisioning hostname "addons-186035"
	I1028 17:07:31.053307   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:31.053453   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.055934   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.056239   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.056256   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.056367   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.056528   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.056677   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.056786   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.056943   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.057126   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.057141   21482 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-186035 && echo "addons-186035" | sudo tee /etc/hostname
	I1028 17:07:31.170205   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-186035
	
	I1028 17:07:31.170231   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.172999   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.173320   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.173343   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.173539   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.173707   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.173842   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.173941   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.174083   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.174716   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.174746   21482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-186035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-186035/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-186035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:07:31.280812   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:07:31.280841   21482 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:07:31.280857   21482 buildroot.go:174] setting up certificates
	I1028 17:07:31.280867   21482 provision.go:84] configureAuth start
	I1028 17:07:31.280875   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:31.281143   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:31.283705   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.284047   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.284069   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.284261   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.286261   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.286575   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.286600   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.286729   21482 provision.go:143] copyHostCerts
	I1028 17:07:31.286800   21482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:07:31.286912   21482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:07:31.286973   21482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:07:31.287032   21482 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.addons-186035 san=[127.0.0.1 192.168.39.15 addons-186035 localhost minikube]
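
	The provision.go:117 line above generates a server certificate signed by the minikube CA carrying the SANs listed (127.0.0.1, 192.168.39.15, addons-186035, localhost, minikube). A rough, self-contained Go sketch of that kind of issuance is below; it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, so paths and error handling are simplified assumptions (errors are elided for brevity).

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for .minikube/certs/ca.pem and ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SAN set from the log: IPs and DNS names.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-186035"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.15")},
			DNSNames:     []string{"addons-186035", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		fmt.Printf("server.pem (%d bytes)\n", len(pemBytes))
	}
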
	I1028 17:07:31.489724   21482 provision.go:177] copyRemoteCerts
	I1028 17:07:31.489778   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:07:31.489799   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.492266   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.492638   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.492665   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.492827   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.493005   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.493161   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.493279   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:31.570093   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:07:31.592765   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:07:31.615119   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:07:31.637067   21482 provision.go:87] duration metric: took 356.189922ms to configureAuth
	I1028 17:07:31.637092   21482 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:07:31.637286   21482 config.go:182] Loaded profile config "addons-186035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:31.637432   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.639858   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.640166   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.640194   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.640360   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.640551   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.640712   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.640828   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.640964   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.641159   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.641174   21482 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:07:31.852812   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:07:31.852837   21482 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:07:31.852847   21482 main.go:141] libmachine: (addons-186035) Calling .GetURL
	I1028 17:07:31.854449   21482 main.go:141] libmachine: (addons-186035) DBG | Using libvirt version 6000000
	I1028 17:07:31.856748   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.857085   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.857111   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.857223   21482 main.go:141] libmachine: Docker is up and running!
	I1028 17:07:31.857238   21482 main.go:141] libmachine: Reticulating splines...
	I1028 17:07:31.857244   21482 client.go:171] duration metric: took 25.752701898s to LocalClient.Create
	I1028 17:07:31.857257   21482 start.go:167] duration metric: took 25.752765567s to libmachine.API.Create "addons-186035"
	I1028 17:07:31.857267   21482 start.go:293] postStartSetup for "addons-186035" (driver="kvm2")
	I1028 17:07:31.857276   21482 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:07:31.857291   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:31.857513   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:07:31.857540   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.859746   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.860015   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.860033   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.860216   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.860365   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.860546   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.860689   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:31.938580   21482 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:07:31.942681   21482 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:07:31.942709   21482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:07:31.942794   21482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:07:31.942826   21482 start.go:296] duration metric: took 85.553049ms for postStartSetup
	I1028 17:07:31.942865   21482 main.go:141] libmachine: (addons-186035) Calling .GetConfigRaw
	I1028 17:07:31.943430   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:31.945814   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.946185   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.946212   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.946399   21482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/config.json ...
	I1028 17:07:31.946565   21482 start.go:128] duration metric: took 25.85857794s to createHost
	I1028 17:07:31.946586   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.948702   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.949032   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.949056   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.949161   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.949312   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.949441   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.949544   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.949663   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.949816   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.949826   21482 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:07:32.044690   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730135252.013899420
	
	I1028 17:07:32.044714   21482 fix.go:216] guest clock: 1730135252.013899420
	I1028 17:07:32.044723   21482 fix.go:229] Guest: 2024-10-28 17:07:32.01389942 +0000 UTC Remote: 2024-10-28 17:07:31.946575948 +0000 UTC m=+25.957944270 (delta=67.323472ms)
	I1028 17:07:32.044760   21482 fix.go:200] guest clock delta is within tolerance: 67.323472ms
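
	The fix.go lines above compare the guest's `date +%s.%N` output with the host-side timestamp and accept the roughly 67ms drift. A small worked example of that arithmetic in Go, using the two timestamps from the log; the one-second tolerance is an assumption for illustration, since the log does not state minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1730135252.013899420" // guest `date +%s.%N`, from the log
		secs, _ := strconv.ParseFloat(guestOut, 64)
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		// "Remote" (host-side) timestamp reported in the log above.
		host, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
			"2024-10-28 17:07:31.946575948 +0000 UTC")

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		tolerance := time.Second
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}
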
	I1028 17:07:32.044767   21482 start.go:83] releasing machines lock for "addons-186035", held for 25.956855526s
	I1028 17:07:32.044785   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.045042   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:32.047595   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.047988   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:32.048009   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.048189   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.048675   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.048816   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.048916   21482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:07:32.048958   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:32.048988   21482 ssh_runner.go:195] Run: cat /version.json
	I1028 17:07:32.049007   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:32.051330   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.051636   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.051669   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:32.051712   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.051793   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:32.051958   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:32.052097   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:32.052120   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.052136   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:32.052275   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:32.052289   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:32.052413   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:32.052539   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:32.052677   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:32.149172   21482 ssh_runner.go:195] Run: systemctl --version
	I1028 17:07:32.155030   21482 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:07:32.312931   21482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:07:32.318889   21482 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:07:32.318945   21482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:07:32.334582   21482 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
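
	The find/mv invocation above renames any bridge or podman CNI configs so they stop being loaded. A rough Go equivalent of that rename pass (it needs root to actually run; the directory and suffix are the ones from the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		var disabled []string
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err == nil {
					disabled = append(disabled, m)
				}
			}
		}
		fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
	}
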
	I1028 17:07:32.334601   21482 start.go:495] detecting cgroup driver to use...
	I1028 17:07:32.334646   21482 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:07:32.350793   21482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:07:32.364418   21482 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:07:32.364454   21482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:07:32.377495   21482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:07:32.390831   21482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:07:32.499414   21482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:07:32.656723   21482 docker.go:233] disabling docker service ...
	I1028 17:07:32.656777   21482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:07:32.670576   21482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:07:32.683025   21482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:07:32.789823   21482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:07:32.893875   21482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:07:32.907462   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:07:32.924915   21482 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:07:32.924962   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.935334   21482 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:07:32.935409   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.945690   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.955838   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.966144   21482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:07:32.976679   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.986688   21482 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:33.002765   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
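
	The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and the cgroupfs cgroup manager. A simplified Go sketch of the same two substitutions follows; it skips the conmon_cgroup and default_sysctls edits shown in the log and assumes the file path reported there.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Println("write:", err)
		}
	}
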
	I1028 17:07:33.012790   21482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:07:33.021810   21482 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:07:33.021851   21482 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:07:33.034728   21482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:07:33.043688   21482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:33.150990   21482 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:07:33.245922   21482 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:07:33.246032   21482 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:07:33.250528   21482 start.go:563] Will wait 60s for crictl version
	I1028 17:07:33.250580   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:07:33.254243   21482 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:07:33.291843   21482 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:07:33.291971   21482 ssh_runner.go:195] Run: crio --version
	I1028 17:07:33.318401   21482 ssh_runner.go:195] Run: crio --version
	I1028 17:07:33.347151   21482 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:07:33.348495   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:33.350869   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:33.351144   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:33.351174   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:33.351340   21482 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:07:33.355278   21482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:07:33.367754   21482 kubeadm.go:883] updating cluster {Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:07:33.367854   21482 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:33.367893   21482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:33.402912   21482 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 17:07:33.402969   21482 ssh_runner.go:195] Run: which lz4
	I1028 17:07:33.406802   21482 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 17:07:33.410880   21482 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 17:07:33.410904   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 17:07:34.643209   21482 crio.go:462] duration metric: took 1.236426115s to copy over tarball
	I1028 17:07:34.643286   21482 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 17:07:36.700078   21482 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.05675708s)
	I1028 17:07:36.700100   21482 crio.go:469] duration metric: took 2.056863264s to extract the tarball
	I1028 17:07:36.700108   21482 ssh_runner.go:146] rm: /preloaded.tar.lz4
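
	The preload handling above stats /preloaded.tar.lz4, copies the cached tarball over when it is missing, and times the lz4 extraction into /var. A rough Go sketch of the check-and-extract half, reusing the tar flags from the logged command; the copy step is omitted, and running it for real needs root and lz4 on the guest.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload tarball not present, would copy it over first:", err)
			return
		}
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	}
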
	I1028 17:07:36.737841   21482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:36.778730   21482 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:07:36.778758   21482 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:07:36.778769   21482 kubeadm.go:934] updating node { 192.168.39.15 8443 v1.31.2 crio true true} ...
	I1028 17:07:36.778864   21482 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-186035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:07:36.778928   21482 ssh_runner.go:195] Run: crio config
	I1028 17:07:36.822774   21482 cni.go:84] Creating CNI manager for ""
	I1028 17:07:36.822800   21482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:07:36.822811   21482 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:07:36.822839   21482 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-186035 NodeName:addons-186035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:07:36.822989   21482 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-186035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:07:36.823065   21482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:07:36.832952   21482 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:07:36.833018   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 17:07:36.842254   21482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 17:07:36.857816   21482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:07:36.873386   21482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I1028 17:07:36.888638   21482 ssh_runner.go:195] Run: grep 192.168.39.15	control-plane.minikube.internal$ /etc/hosts
	I1028 17:07:36.892006   21482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
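
	The bash one-liner above swaps in a fresh control-plane.minikube.internal entry in /etc/hosts. A Go sketch of the same replace-then-append edit, writing the file in place rather than via the tmp-file copy used in the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.15\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue // drop the stale entry, mirroring the grep -v
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			fmt.Println("write:", err)
		}
	}
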
	I1028 17:07:36.903391   21482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:37.007615   21482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:07:37.023383   21482 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035 for IP: 192.168.39.15
	I1028 17:07:37.023403   21482 certs.go:194] generating shared ca certs ...
	I1028 17:07:37.023417   21482 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.023555   21482 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:07:37.094339   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt ...
	I1028 17:07:37.094367   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt: {Name:mkada548ed9e0c555f18d752b1d48c2553324d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.094547   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key ...
	I1028 17:07:37.094566   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key: {Name:mk7617196eb13bec3904d40a6eb678c962caa127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.094662   21482 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:07:37.296322   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt ...
	I1028 17:07:37.296350   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt: {Name:mk907c2ff38a41d71da690c87000fdec457eedf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.296536   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key ...
	I1028 17:07:37.296550   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key: {Name:mkd5769b54aa6510303440ab3c3d5990a21d9179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.296644   21482 certs.go:256] generating profile certs ...
	I1028 17:07:37.296714   21482 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.key
	I1028 17:07:37.296731   21482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt with IP's: []
	I1028 17:07:37.365116   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt ...
	I1028 17:07:37.365142   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: {Name:mk487e652aecd824a7f47239181ca89c76ddaa90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.365304   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.key ...
	I1028 17:07:37.365319   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.key: {Name:mka84a25792ede6a47b729c3ceff8f0cb7111375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.365419   21482 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a
	I1028 17:07:37.365437   21482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.15]
	I1028 17:07:37.473713   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a ...
	I1028 17:07:37.473743   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a: {Name:mk248109e4e732c5f785720069c3ec8f2de866d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.473908   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a ...
	I1028 17:07:37.473934   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a: {Name:mk50db299e4807dbfdcb03b09ebc15fd48dd67b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.474065   21482 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt
	I1028 17:07:37.474183   21482 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key
	I1028 17:07:37.474258   21482 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key
	I1028 17:07:37.474280   21482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt with IP's: []
	I1028 17:07:37.734531   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt ...
	I1028 17:07:37.734567   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt: {Name:mk442dfe2507a428f23025393ef9a62e46c131dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.734747   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key ...
	I1028 17:07:37.734762   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key: {Name:mkdb5af6742887099ce4f26b9a16b971f8da3993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.734951   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:07:37.735002   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:07:37.735039   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:07:37.735073   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:07:37.735666   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:07:37.769726   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:07:37.803916   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:07:37.830176   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:07:37.852090   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 17:07:37.873920   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:07:37.895585   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:07:37.916994   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 17:07:37.938543   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:07:37.960307   21482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:07:37.975584   21482 ssh_runner.go:195] Run: openssl version
	I1028 17:07:37.981026   21482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:07:37.991252   21482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:37.995565   21482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:37.995613   21482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:38.001121   21482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
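
	The command above links minikubeCA.pem into /etc/ssl/certs under its OpenSSL subject-hash name. A minimal Go sketch of that step, reusing the hash value b5213941 reported in the log rather than recomputing it:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		target := "/etc/ssl/certs/minikubeCA.pem"
		link := "/etc/ssl/certs/b5213941.0" // subject-hash name from the log
		if _, err := os.Lstat(link); err == nil {
			fmt.Println("hash link already present:", link)
			return
		}
		if err := os.Symlink(target, link); err != nil {
			fmt.Println("symlink:", err)
			return
		}
		fmt.Println("linked", link, "->", target)
	}
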
	I1028 17:07:38.011529   21482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:07:38.015267   21482 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:07:38.015315   21482 kubeadm.go:392] StartCluster: {Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:38.015391   21482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:07:38.015458   21482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:07:38.050576   21482 cri.go:89] found id: ""
	I1028 17:07:38.050640   21482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:07:38.060370   21482 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:07:38.070040   21482 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:07:38.079484   21482 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:07:38.079505   21482 kubeadm.go:157] found existing configuration files:
	
	I1028 17:07:38.079547   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:07:38.088148   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:07:38.088198   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:07:38.097156   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:07:38.105646   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:07:38.105695   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:07:38.114647   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:07:38.123228   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:07:38.123275   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:07:38.132235   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:07:38.140745   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:07:38.140781   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 17:07:38.149583   21482 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 17:07:38.198633   21482 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:07:38.198755   21482 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:07:38.295379   21482 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:07:38.295469   21482 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:07:38.295562   21482 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:07:38.305391   21482 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:07:38.307678   21482 out.go:235]   - Generating certificates and keys ...
	I1028 17:07:38.307781   21482 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:07:38.307867   21482 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:07:38.406005   21482 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:07:38.616391   21482 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:07:38.695199   21482 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:07:38.889115   21482 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:07:38.990534   21482 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:07:38.990839   21482 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-186035 localhost] and IPs [192.168.39.15 127.0.0.1 ::1]
	I1028 17:07:39.067578   21482 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:07:39.067968   21482 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-186035 localhost] and IPs [192.168.39.15 127.0.0.1 ::1]
	I1028 17:07:39.251059   21482 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:07:39.418889   21482 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:07:39.653294   21482 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:07:39.653553   21482 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:07:39.729619   21482 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:07:40.187636   21482 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:07:40.378175   21482 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:07:40.499205   21482 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:07:40.671893   21482 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:07:40.672418   21482 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:07:40.674801   21482 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:07:40.719322   21482 out.go:235]   - Booting up control plane ...
	I1028 17:07:40.719429   21482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:07:40.719503   21482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:07:40.719576   21482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:07:40.719706   21482 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:07:40.719825   21482 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:07:40.719890   21482 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:07:40.827039   21482 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:07:40.827184   21482 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:07:41.328506   21482 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.776952ms
	I1028 17:07:41.328601   21482 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:07:46.326901   21482 kubeadm.go:310] [api-check] The API server is healthy after 5.00108443s
	I1028 17:07:46.346228   21482 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:07:46.362805   21482 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:07:46.406830   21482 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:07:46.407060   21482 kubeadm.go:310] [mark-control-plane] Marking the node addons-186035 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:07:46.419772   21482 kubeadm.go:310] [bootstrap-token] Using token: dfdzjm.eymlbvu4shoxlmen
	I1028 17:07:46.421017   21482 out.go:235]   - Configuring RBAC rules ...
	I1028 17:07:46.421176   21482 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:07:46.428636   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:07:46.439235   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:07:46.444538   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:07:46.448194   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:07:46.454781   21482 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:07:46.732898   21482 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:07:47.172240   21482 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:07:47.732054   21482 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:07:47.738101   21482 kubeadm.go:310] 
	I1028 17:07:47.738181   21482 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:07:47.738194   21482 kubeadm.go:310] 
	I1028 17:07:47.738314   21482 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:07:47.738337   21482 kubeadm.go:310] 
	I1028 17:07:47.738388   21482 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:07:47.738516   21482 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:07:47.738598   21482 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:07:47.738610   21482 kubeadm.go:310] 
	I1028 17:07:47.738679   21482 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:07:47.738691   21482 kubeadm.go:310] 
	I1028 17:07:47.738746   21482 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:07:47.738755   21482 kubeadm.go:310] 
	I1028 17:07:47.738824   21482 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:07:47.738919   21482 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:07:47.739025   21482 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:07:47.739048   21482 kubeadm.go:310] 
	I1028 17:07:47.739166   21482 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:07:47.739281   21482 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:07:47.739297   21482 kubeadm.go:310] 
	I1028 17:07:47.739406   21482 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dfdzjm.eymlbvu4shoxlmen \
	I1028 17:07:47.739553   21482 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 17:07:47.739585   21482 kubeadm.go:310] 	--control-plane 
	I1028 17:07:47.739596   21482 kubeadm.go:310] 
	I1028 17:07:47.739721   21482 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:07:47.739741   21482 kubeadm.go:310] 
	I1028 17:07:47.739851   21482 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dfdzjm.eymlbvu4shoxlmen \
	I1028 17:07:47.739979   21482 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 17:07:47.741855   21482 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
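	(Editor note: the block above is kubeadm's standard init completion output. As a minimal, illustrative sketch only, assuming a shell on the node itself — the test harness instead drives the cluster through the kubeconfig minikube writes — the freshly booted control plane could be checked by hand with:
		export KUBECONFIG=/etc/kubernetes/admin.conf
		kubectl get nodes
		kubectl get pods -n kube-system
	These are generic kubectl commands, not commands taken from this log.)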
	I1028 17:07:47.741886   21482 cni.go:84] Creating CNI manager for ""
	I1028 17:07:47.741896   21482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:07:47.743459   21482 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 17:07:47.744658   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 17:07:47.755072   21482 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
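	(Editor note: the 1-k8s.conflist written above is a bridge CNI configuration. The sketch below is a generic example of such a file, shown only for illustration; the actual 496-byte payload and its pod subnet are not reproduced in the log, so all field values here are assumptions:
		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{
		  "cniVersion": "0.4.0",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF
	)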
	I1028 17:07:47.775272   21482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:07:47.775341   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:47.775405   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-186035 minikube.k8s.io/updated_at=2024_10_28T17_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=addons-186035 minikube.k8s.io/primary=true
	I1028 17:07:47.795889   21482 ops.go:34] apiserver oom_adj: -16
	I1028 17:07:47.920087   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:48.420279   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:48.920721   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:49.420583   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:49.920445   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:50.420583   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:50.920910   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:51.420497   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:51.920444   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:52.420920   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:52.524157   21482 kubeadm.go:1113] duration metric: took 4.748871873s to wait for elevateKubeSystemPrivileges
	I1028 17:07:52.524203   21482 kubeadm.go:394] duration metric: took 14.508889603s to StartCluster
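	(Editor note: the repeated "kubectl get sa default" runs above are minikube polling for the default ServiceAccount to appear before it grants kube-system privileges. A minimal shell sketch of that wait pattern — an assumed equivalent for clarity, not minikube's actual Go implementation:
		until sudo /var/lib/minikube/binaries/v1.31.2/kubectl \
		    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
		  sleep 0.5
		done
	)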
	I1028 17:07:52.524228   21482 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:52.524384   21482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:07:52.524906   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:52.525153   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:07:52.525175   21482 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:07:52.525245   21482 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
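	(Editor note: the toEnable map above corresponds to the per-profile addon state that can also be inspected or toggled from the minikube CLI. Purely as an illustration, not taken from this log:
		minikube addons list -p addons-186035
		minikube addons enable metrics-server -p addons-186035
	)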
	I1028 17:07:52.525385   21482 config.go:182] Loaded profile config "addons-186035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:52.525403   21482 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-186035"
	I1028 17:07:52.525401   21482 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-186035"
	I1028 17:07:52.525423   21482 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-186035"
	I1028 17:07:52.525428   21482 addons.go:69] Setting default-storageclass=true in profile "addons-186035"
	I1028 17:07:52.525440   21482 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-186035"
	I1028 17:07:52.525430   21482 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-186035"
	I1028 17:07:52.525465   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525474   21482 addons.go:69] Setting gcp-auth=true in profile "addons-186035"
	I1028 17:07:52.525455   21482 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-186035"
	I1028 17:07:52.525479   21482 addons.go:69] Setting registry=true in profile "addons-186035"
	I1028 17:07:52.525490   21482 addons.go:69] Setting ingress-dns=true in profile "addons-186035"
	I1028 17:07:52.525500   21482 addons.go:69] Setting inspektor-gadget=true in profile "addons-186035"
	I1028 17:07:52.525501   21482 addons.go:69] Setting storage-provisioner=true in profile "addons-186035"
	I1028 17:07:52.525507   21482 addons.go:234] Setting addon ingress-dns=true in "addons-186035"
	I1028 17:07:52.525511   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525513   21482 addons.go:234] Setting addon inspektor-gadget=true in "addons-186035"
	I1028 17:07:52.525513   21482 addons.go:234] Setting addon storage-provisioner=true in "addons-186035"
	I1028 17:07:52.525537   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525545   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525549   21482 addons.go:69] Setting volumesnapshots=true in profile "addons-186035"
	I1028 17:07:52.525564   21482 addons.go:234] Setting addon volumesnapshots=true in "addons-186035"
	I1028 17:07:52.525587   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525854   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525890   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525931   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525949   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525954   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525391   21482 addons.go:69] Setting yakd=true in profile "addons-186035"
	I1028 17:07:52.525971   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525979   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525983   21482 addons.go:69] Setting ingress=true in profile "addons-186035"
	I1028 17:07:52.525996   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.526046   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.526131   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525473   21482 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-186035"
	I1028 17:07:52.526001   21482 addons.go:234] Setting addon ingress=true in "addons-186035"
	I1028 17:07:52.526504   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525407   21482 addons.go:69] Setting metrics-server=true in profile "addons-186035"
	I1028 17:07:52.526561   21482 addons.go:234] Setting addon metrics-server=true in "addons-186035"
	I1028 17:07:52.526593   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.526593   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.526628   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.526859   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.526887   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.526988   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.527016   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525981   21482 addons.go:234] Setting addon yakd=true in "addons-186035"
	I1028 17:07:52.527088   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.527449   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.527475   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525541   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.527957   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.528016   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525492   21482 addons.go:234] Setting addon registry=true in "addons-186035"
	I1028 17:07:52.528188   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525463   21482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-186035"
	I1028 17:07:52.525466   21482 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-186035"
	I1028 17:07:52.528345   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525467   21482 addons.go:69] Setting volcano=true in profile "addons-186035"
	I1028 17:07:52.528625   21482 addons.go:234] Setting addon volcano=true in "addons-186035"
	I1028 17:07:52.528686   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525477   21482 addons.go:69] Setting cloud-spanner=true in profile "addons-186035"
	I1028 17:07:52.529112   21482 addons.go:234] Setting addon cloud-spanner=true in "addons-186035"
	I1028 17:07:52.529151   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525493   21482 mustload.go:65] Loading cluster: addons-186035
	I1028 17:07:52.532522   21482 out.go:177] * Verifying Kubernetes components...
	I1028 17:07:52.533951   21482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:52.546594   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1028 17:07:52.546808   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I1028 17:07:52.546840   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1028 17:07:52.547126   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.547252   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.547275   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.547814   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.547832   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.548189   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.548295   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.548314   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.548327   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.548365   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.548880   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1028 17:07:52.548914   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.548921   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.549256   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.549274   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.549311   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.549803   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.549835   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.550082   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.550126   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.550158   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.551703   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I1028 17:07:52.556743   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.556786   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.557035   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.557074   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.557394   21482 config.go:182] Loaded profile config "addons-186035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:52.557731   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.557775   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.558263   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.558306   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.558832   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.558877   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.559376   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.559411   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.559886   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.559921   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.560412   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.560444   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.565092   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I1028 17:07:52.565227   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I1028 17:07:52.565629   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.565736   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.565804   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.566349   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.566355   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.566366   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.566370   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.567089   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.567106   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.567164   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.567198   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.567778   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.567811   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.567883   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.567952   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.572197   21482 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-186035"
	I1028 17:07:52.572240   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.572729   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.572759   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.589114   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.589169   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.595230   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I1028 17:07:52.595458   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
	I1028 17:07:52.596462   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.597106   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.597127   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.597593   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I1028 17:07:52.597730   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.597801   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I1028 17:07:52.597954   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.598042   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.598483   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.598636   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.598648   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.599756   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.599857   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I1028 17:07:52.599944   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.600019   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.600427   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.600461   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.600968   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.601491   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.601509   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.601567   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.602073   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.602092   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.602399   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.603012   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.603056   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.603413   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.603475   21482 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 17:07:52.603588   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I1028 17:07:52.603880   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.604083   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.604130   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.604201   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.604592   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.604613   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.604721   21482 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 17:07:52.604739   21482 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 17:07:52.604765   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.604912   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.605125   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.605146   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.605204   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.605460   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.605587   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.606007   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
	I1028 17:07:52.607309   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
	I1028 17:07:52.607828   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.608546   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.608766   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.608792   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.609002   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.609034   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.609107   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.609286   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.609341   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.609459   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.609573   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.609680   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.609977   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I1028 17:07:52.610094   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.610802   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.610998   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.611020   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.611129   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.611451   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.611705   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.612108   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.612385   21482 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 17:07:52.612646   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.612669   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.613230   21482 addons.go:234] Setting addon default-storageclass=true in "addons-186035"
	I1028 17:07:52.613274   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.613624   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.613663   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.614103   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I1028 17:07:52.614369   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I1028 17:07:52.614587   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.614657   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.614731   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.614768   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 17:07:52.614831   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41759
	I1028 17:07:52.614916   21482 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:07:52.614927   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 17:07:52.614943   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.615363   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.615402   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.616033   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.616047   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.616103   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.616406   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 17:07:52.616421   21482 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 17:07:52.616438   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.616690   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.616709   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.616847   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.617162   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.617619   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.617647   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.617679   21482 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 17:07:52.617773   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.617806   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.618409   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.618892   21482 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:07:52.618907   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 17:07:52.618922   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.620157   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.620179   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.620613   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.621672   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.621698   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.621716   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.621886   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.621920   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.622764   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.623486   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.623599   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.623739   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.623846   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.623944   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.624757   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.624777   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.624803   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I1028 17:07:52.625064   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.625119   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.625455   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.625570   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.625657   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.625676   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.625677   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.625725   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.625858   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.625978   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.626181   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.626196   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.626254   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.626588   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.627119   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.627151   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.645347   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I1028 17:07:52.645945   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.646267   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I1028 17:07:52.646620   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I1028 17:07:52.646796   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.647007   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.647169   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.647180   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.647304   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.647317   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.648413   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.648477   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.648511   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.648534   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.649226   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.649267   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.649720   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.649761   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.649978   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1028 17:07:52.649988   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36727
	I1028 17:07:52.649994   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.650281   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.650554   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.650617   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.651037   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.651054   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.651424   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.651608   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.651926   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.651943   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.652351   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.652563   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.653103   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.654166   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.654378   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:52.654399   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:52.656097   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I1028 17:07:52.656102   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:52.656129   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:52.656134   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:52.656139   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:52.656143   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:52.656369   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:52.656399   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:52.656407   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	W1028 17:07:52.656570   21482 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 17:07:52.657184   21482 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 17:07:52.657442   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.657651   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I1028 17:07:52.658060   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.658118   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.658528   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.658547   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.658956   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.658971   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.659022   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.659273   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.659335   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.659473   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.659868   21482 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 17:07:52.659961   21482 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 17:07:52.661031   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 17:07:52.661050   21482 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 17:07:52.661075   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.661208   21482 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 17:07:52.661218   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 17:07:52.661232   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.662012   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I1028 17:07:52.662159   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I1028 17:07:52.662579   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.662651   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.662757   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I1028 17:07:52.663042   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.663292   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.663351   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.663400   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.663414   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.663421   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.663820   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.663926   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.664059   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.664184   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.664220   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.664266   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.664294   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.664737   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.664945   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.665066   21482 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 17:07:52.665149   21482 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 17:07:52.666265   21482 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:07:52.666289   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 17:07:52.666306   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.666368   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 17:07:52.666382   21482 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 17:07:52.666395   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.667805   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.668253   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.669223   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 17:07:52.669224   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 17:07:52.670127   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.670586   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.670609   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.670766   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.670819   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.670950   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.671099   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.671243   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.671266   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.671288   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.671407   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.671551   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.671687   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.671689   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.671802   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.671825   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.672243   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.672262   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.672408   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 17:07:52.672414   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:07:52.672615   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.672635   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.672891   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.672963   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.673034   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.673149   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.673205   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.673392   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.673420   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.673728   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.673983   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I1028 17:07:52.674408   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.674775   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.674788   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.674844   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1028 17:07:52.675110   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:07:52.675126   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.675171   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.675129   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 17:07:52.675351   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.675881   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.675901   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.676315   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.676573   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.676684   21482 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:07:52.676700   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 17:07:52.676715   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.677359   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.677644   21482 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:07:52.677667   21482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:07:52.677683   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.677981   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 17:07:52.678944   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.679452   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45537
	I1028 17:07:52.679804   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.680248   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 17:07:52.680372   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.680387   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.680448   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34101
	I1028 17:07:52.680248   21482 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 17:07:52.680756   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.680801   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.681062   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.681202   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.681213   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.681833   21482 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 17:07:52.681849   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 17:07:52.681866   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.682247   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.682292   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.682428   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.682447   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.682473   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.682517   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.682875   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.682944   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.683020   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.683178   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 17:07:52.683552   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.684349   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.685280   21482 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:07:52.685338   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.684765   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.686463   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 17:07:52.686518   21482 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 17:07:52.686621   21482 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:07:52.686633   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:07:52.686647   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.686777   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.686846   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.686861   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.686940   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.687075   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.687152   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.687287   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.687302   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.687709   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.687901   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.688010   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.688141   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.688962   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 17:07:52.688973   21482 out.go:177]   - Using image docker.io/busybox:stable
	I1028 17:07:52.689244   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.689518   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.689542   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.689716   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.689887   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.689987   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.690160   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 17:07:52.690177   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 17:07:52.690184   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.690193   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.690192   21482 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:07:52.690237   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 17:07:52.690247   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.693427   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693455   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693722   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.693745   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693763   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.693781   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693880   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.694051   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.694073   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.694191   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.694206   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.694323   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.694355   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.694469   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	W1028 17:07:52.695271   21482 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54016->192.168.39.15:22: read: connection reset by peer
	I1028 17:07:52.695290   21482 retry.go:31] will retry after 267.962113ms: ssh: handshake failed: read tcp 192.168.39.1:54016->192.168.39.15:22: read: connection reset by peer
	I1028 17:07:52.992291   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:07:53.031200   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:07:53.137377   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 17:07:53.137762   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:07:53.157537   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:07:53.162008   21482 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:07:53.162027   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 17:07:53.165301   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 17:07:53.165317   21482 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 17:07:53.177271   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 17:07:53.177287   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 17:07:53.192585   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:07:53.195467   21482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 17:07:53.195480   21482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 17:07:53.217704   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 17:07:53.217723   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 17:07:53.256907   21482 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 17:07:53.256931   21482 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 17:07:53.293602   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:07:53.333544   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:07:53.336198   21482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:07:53.336259   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:07:53.358378   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 17:07:53.358398   21482 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 17:07:53.413385   21482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 17:07:53.413407   21482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 17:07:53.419354   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 17:07:53.419374   21482 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 17:07:53.424510   21482 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:07:53.424527   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 17:07:53.451477   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 17:07:53.451497   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 17:07:53.502001   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:07:53.617900   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:07:53.617923   21482 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 17:07:53.640962   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 17:07:53.640986   21482 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 17:07:53.663527   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:07:53.692384   21482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 17:07:53.692415   21482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 17:07:53.708314   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 17:07:53.708338   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 17:07:53.887027   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:07:53.932149   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 17:07:53.932181   21482 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 17:07:54.003409   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:07:54.003429   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 17:07:54.018125   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 17:07:54.018146   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 17:07:54.289059   21482 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:07:54.289079   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 17:07:54.320159   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:07:54.364024   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 17:07:54.364057   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 17:07:54.579938   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.587609067s)
	I1028 17:07:54.579985   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:54.579993   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:54.580296   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:54.580362   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:54.580374   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:54.580387   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:54.580396   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:54.580621   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:54.580637   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:54.655316   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 17:07:54.655339   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 17:07:54.723017   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:07:55.001460   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 17:07:55.001486   21482 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 17:07:55.279179   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 17:07:55.279202   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 17:07:55.608911   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 17:07:55.608932   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 17:07:55.927681   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:07:55.927723   21482 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 17:07:56.189657   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:07:57.000004   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.968767759s)
	I1028 17:07:57.000066   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.862285517s)
	I1028 17:07:57.000080   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000094   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000105   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000122   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000023   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.862613262s)
	I1028 17:07:57.000164   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000192   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000554   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000566   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000580   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000589   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000589   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000597   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000595   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000600   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000612   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000598   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000620   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000629   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000639   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000675   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000853   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000880   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000887   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000886   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000898   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000969   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000993   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.001001   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.199431   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.199494   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.199805   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.199863   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.199880   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.575231   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.417659106s)
	I1028 17:07:57.575671   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.383039171s)
	I1028 17:07:57.575730   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.575752   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.575839   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.575904   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.576019   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.576035   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.576045   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.576051   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.576158   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.576432   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.576460   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.576491   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.578115   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.578119   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.578142   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.578152   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.578168   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.578398   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.578412   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:59.165330   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.871692113s)
	I1028 17:07:59.165393   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:59.165412   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:59.165761   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:59.165781   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:59.165789   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:59.165797   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:59.165797   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:59.166053   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:59.166072   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:59.166094   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:59.677575   21482 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 17:07:59.677610   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:59.680719   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:59.681101   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:59.681132   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:59.681315   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:59.681491   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:59.681591   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:59.681679   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:08:00.007636   21482 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 17:08:00.052774   21482 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.716549471s)
	I1028 17:08:00.052797   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.719216936s)
	I1028 17:08:00.052844   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.052856   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.052865   21482 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.716579374s)
	I1028 17:08:00.052891   21482 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
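	(The sed pipeline that just completed edits the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway of the VM network. Reconstructed from the sed expression in the command itself, the block injected into the Corefile is roughly:

		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}
	)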
	I1028 17:08:00.052973   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.550945492s)
	I1028 17:08:00.052998   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053011   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053102   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.3895371s)
	I1028 17:08:00.053133   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053152   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053256   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.053301   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053317   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053318   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.733133317s)
	I1028 17:08:00.053326   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053337   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053352   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053426   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.053440   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053445   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.053448   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053457   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053477   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.053489   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053497   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053504   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053947   21482 node_ready.go:35] waiting up to 6m0s for node "addons-186035" to be "Ready" ...
	I1028 17:08:00.054065   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054091   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054098   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.054159   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054183   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054208   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054216   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.054226   21482 addons.go:475] Verifying addon registry=true in "addons-186035"
	I1028 17:08:00.054236   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054247   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.054254   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.054261   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.054543   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054576   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054582   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053270   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.166214019s)
	I1028 17:08:00.055239   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.055252   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.055350   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.055358   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.055368   21482 addons.go:475] Verifying addon ingress=true in "addons-186035"
	I1028 17:08:00.054207   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.056518   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.056533   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.056541   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.056548   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.056768   21482 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-186035 service yakd-dashboard -n yakd-dashboard
	
	I1028 17:08:00.056807   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.057174   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.057185   21482 addons.go:475] Verifying addon metrics-server=true in "addons-186035"
	I1028 17:08:00.056814   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.056877   21482 out.go:177] * Verifying registry addon...
	I1028 17:08:00.057591   21482 out.go:177] * Verifying ingress addon...
	I1028 17:08:00.058962   21482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 17:08:00.059980   21482 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 17:08:00.083213   21482 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 17:08:00.083243   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:00.084652   21482 node_ready.go:49] node "addons-186035" has status "Ready":"True"
	I1028 17:08:00.084671   21482 node_ready.go:38] duration metric: took 30.703689ms for node "addons-186035" to be "Ready" ...
	I1028 17:08:00.084678   21482 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:08:00.084682   21482 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 17:08:00.084697   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:00.093471   21482 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:00.164740   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.164763   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.165129   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.165151   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.165166   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.179805   21482 addons.go:234] Setting addon gcp-auth=true in "addons-186035"
	I1028 17:08:00.179849   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:08:00.180129   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:08:00.180162   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:08:00.194093   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34197
	I1028 17:08:00.194519   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:08:00.194982   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:08:00.195006   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:08:00.195323   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:08:00.195867   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:08:00.195916   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:08:00.209995   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I1028 17:08:00.210372   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:08:00.210817   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:08:00.210839   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:08:00.211156   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:08:00.211354   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:08:00.212829   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:08:00.213065   21482 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 17:08:00.213091   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:08:00.215442   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:08:00.215831   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:08:00.215858   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:08:00.215988   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:08:00.216138   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:08:00.216297   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:08:00.216434   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:08:00.575318   21482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-186035" context rescaled to 1 replicas
	I1028 17:08:00.620405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:00.620572   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:00.920067   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.197002085s)
	W1028 17:08:00.920122   21482 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 17:08:00.920153   21482 retry.go:31] will retry after 343.96168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
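	(The failure above is a CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRDs that define it, so the API server has not yet registered the new kind when the custom resource arrives; minikube simply retries, switching to apply --force below. As a sketch of how the same race could be avoided manually, assuming the CRD name shown in the stdout above, one could wait for the CRD to be established before applying resources that depend on it:

		kubectl wait --for=condition=Established \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	)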
	I1028 17:08:01.067182   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:01.070157   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:01.264691   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:01.566332   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:01.573463   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:02.093689   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.903983014s)
	I1028 17:08:02.093741   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:02.093749   21482 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.880661358s)
	I1028 17:08:02.093756   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:02.094106   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:02.094119   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:02.094135   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:02.094149   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:02.094156   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:02.094376   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:02.094394   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:02.094403   21482 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-186035"
	I1028 17:08:02.094407   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:02.095210   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:02.096013   21482 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 17:08:02.097296   21482 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 17:08:02.098239   21482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 17:08:02.098308   21482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 17:08:02.098322   21482 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 17:08:02.144105   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:02.144276   21482 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 17:08:02.144300   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:02.144302   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:02.236186   21482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 17:08:02.236215   21482 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 17:08:02.266097   21482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 17:08:02.266124   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 17:08:02.287509   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 17:08:02.441509   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:02.565991   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:02.566335   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:02.612600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:03.065354   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:03.065644   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:03.104661   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:03.194724   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.929982887s)
	I1028 17:08:03.194779   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.194795   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.195072   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.195089   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.195098   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.195106   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.195108   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:03.195355   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.195367   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.195384   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:03.583470   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:03.587388   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:03.616534   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.328985963s)
	I1028 17:08:03.616588   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.616604   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.616852   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.616867   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.616875   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.616880   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.617123   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.617171   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:03.617173   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.618143   21482 addons.go:475] Verifying addon gcp-auth=true in "addons-186035"
	I1028 17:08:03.619735   21482 out.go:177] * Verifying gcp-auth addon...
	I1028 17:08:03.621949   21482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 17:08:03.626938   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:03.675775   21482 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 17:08:03.675798   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:04.064260   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:04.064682   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:04.102419   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:04.125117   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:04.563428   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:04.564212   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:04.598220   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:04.664171   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:04.664832   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:05.064896   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:05.065096   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:05.102557   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:05.125922   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:05.563736   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:05.564613   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:05.664930   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:05.665237   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:06.064278   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:06.064758   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:06.102703   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:06.124760   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:06.563765   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:06.564119   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:06.600398   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:06.604018   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:06.625881   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:07.064772   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:07.065197   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:07.102774   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:07.126075   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:07.661495   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:07.661879   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:07.661984   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:07.665091   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:08.064122   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:08.065264   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:08.103705   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:08.125763   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:08.564521   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:08.564560   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:08.602430   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:08.625261   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:09.067396   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:09.067686   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:09.100320   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:09.103813   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:09.126344   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:09.562504   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:09.564313   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:09.601691   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:09.626078   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:10.062382   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:10.063869   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:10.102092   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:10.125907   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:10.563012   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:10.564566   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:10.602233   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:10.624341   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:11.069590   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:11.070169   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:11.170557   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:11.171415   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:11.564848   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:11.565964   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:11.600299   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:11.602859   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:11.624369   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:12.062912   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:12.064583   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:12.102933   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:12.125191   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:12.563946   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:12.564048   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:12.603455   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:12.625250   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:13.063891   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:13.064035   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:13.102773   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:13.125473   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:13.563486   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:13.564961   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:13.602486   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:13.625570   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:14.063649   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:14.063883   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:14.100041   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:14.102518   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:14.128700   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:14.563404   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:14.564018   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:14.602465   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:14.625767   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:15.065245   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:15.065311   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:15.102854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:15.125274   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:15.562511   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:15.564831   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:15.603841   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:15.625672   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:16.062834   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.064057   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.101783   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:16.125696   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:16.566695   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.566806   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.599556   21482 pod_ready.go:93] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.599579   21482 pod_ready.go:82] duration metric: took 16.50608757s for pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.599593   21482 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.601650   21482 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9zldx" not found
	I1028 17:08:16.601667   21482 pod_ready.go:82] duration metric: took 2.068887ms for pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace to be "Ready" ...
	E1028 17:08:16.601676   21482 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9zldx" not found
	I1028 17:08:16.601681   21482 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-znpww" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.603189   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:16.605560   21482 pod_ready.go:93] pod "coredns-7c65d6cfc9-znpww" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.605575   21482 pod_ready.go:82] duration metric: took 3.88807ms for pod "coredns-7c65d6cfc9-znpww" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.605585   21482 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.609080   21482 pod_ready.go:93] pod "etcd-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.609093   21482 pod_ready.go:82] duration metric: took 3.502025ms for pod "etcd-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.609103   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.613324   21482 pod_ready.go:93] pod "kube-apiserver-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.613341   21482 pod_ready.go:82] duration metric: took 4.230713ms for pod "kube-apiserver-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.613351   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.624015   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:16.798172   21482 pod_ready.go:93] pod "kube-controller-manager-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.798209   21482 pod_ready.go:82] duration metric: took 184.847708ms for pod "kube-controller-manager-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.798229   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qhnsh" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.064196   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.064776   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.103180   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:17.128989   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:17.197618   21482 pod_ready.go:93] pod "kube-proxy-qhnsh" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:17.197644   21482 pod_ready.go:82] duration metric: took 399.40754ms for pod "kube-proxy-qhnsh" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.197654   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.565210   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.566634   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.597416   21482 pod_ready.go:93] pod "kube-scheduler-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:17.597437   21482 pod_ready.go:82] duration metric: took 399.777939ms for pod "kube-scheduler-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.597447   21482 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.602789   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:17.624549   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:18.062940   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.064607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.103465   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.126027   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:18.562768   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.564516   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.602820   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.624331   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:19.064258   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.064492   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.103543   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.125509   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:19.564160   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.565280   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.602522   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.603068   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:19.625127   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.062833   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.063948   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.102888   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.124780   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.563443   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.563699   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.603494   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.624789   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.065493   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.065653   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.103119   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.128065   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.562189   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.564412   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.603308   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.603915   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:21.625069   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.063065   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.065223   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.103656   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.125112   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.562504   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.564672   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.604051   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.624596   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.062625   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.064113   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.103983   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.125153   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.563163   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.564607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.602600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.625373   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.062460   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.064582   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.103771   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.104019   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:24.126817   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.564462   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.564847   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.603100   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.626047   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.063259   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.065124   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.102410   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.125438   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.565176   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.565479   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.603320   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.625909   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.171781   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.172983   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.173976   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.174183   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.176671   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:26.564071   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.564277   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.603833   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.626562   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.067485   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.067876   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.103572   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.128885   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.561950   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.563969   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.602667   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.625144   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.062858   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.064176   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.106096   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.124951   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.563217   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.564837   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.601844   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.603204   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:28.625172   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.063482   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.064306   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.102405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.124537   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.563437   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.564683   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.602135   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.624611   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.063593   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.063779   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.102314   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.125159   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.562854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.564218   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.602854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.603435   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:30.625191   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.063723   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.063944   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.102134   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.125088   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.562681   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.563833   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.603068   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.627154   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.065865   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.066467   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.102535   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.124783   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.563963   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.564399   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.603461   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.604050   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:32.625015   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.064004   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.065335   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.102888   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.125122   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.563416   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.564694   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.603279   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.624705   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.063801   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.064956   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.102876   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.126256   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.562487   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.563716   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.605171   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.607869   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:34.625059   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.062852   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.063919   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.103499   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.124777   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.563895   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.564275   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.602678   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.625198   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.063276   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.064064   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.103062   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.124675   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.562422   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.564062   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.602847   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.625295   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.064627   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.064852   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.103125   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.104518   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:37.125058   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.564375   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.565160   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.603404   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.626894   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.063658   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.064168   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.103103   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.125226   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.564084   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.564520   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.602646   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.625027   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.063116   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.063580   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.103500   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.124482   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.563044   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.564395   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.603498   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.604310   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:39.624947   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.062843   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.064334   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.102653   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.124686   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.563742   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.564134   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.604408   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.625456   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.068721   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.069429   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.102783   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.125380   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.562630   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.564920   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.602500   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.625234   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.062592   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.065067   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.104016   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.106013   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:42.125814   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.565298   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.565726   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.603074   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.624726   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.063491   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.064282   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.102902   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.125357   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.563020   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.564672   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.602827   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.624099   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.064326   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.064688   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.102330   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:44.124818   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.566453   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.566912   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.612951   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:44.664103   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.665046   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.064204   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.064631   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.106426   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.125971   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:45.568424   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.568661   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.602848   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.625307   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.063740   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.063931   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.102741   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.124749   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.562312   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.563537   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.602662   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.625147   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.065769   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.065855   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.103028   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.104208   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:47.124318   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.562746   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.564203   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.604512   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.625092   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.063680   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.064944   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.104316   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.125312   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.563820   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.564025   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.603321   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.625198   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.062576   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.064907   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.102375   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.125410   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.562911   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.564373   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.603606   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:49.603884   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.625163   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.062507   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.066315   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.103288   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.125609   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.564862   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.565760   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.603686   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.625099   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.062772   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.065474   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.102577   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.124458   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.563841   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.564700   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.603425   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.608428   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:51.625419   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.064155   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.066107   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.103067   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.124798   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.563456   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.563906   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.602486   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.624416   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.064017   21482 kapi.go:107] duration metric: took 53.005051602s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 17:08:53.064297   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.166752   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.167057   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.565359   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.602659   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.624519   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.065167   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.102171   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.103112   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:54.125455   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.564446   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.603064   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.624817   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.063505   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.103977   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.125828   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.564998   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.603207   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.624869   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.064519   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.104275   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.105011   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:56.125608   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.564013   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.603433   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.624935   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.064542   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.105512   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.125374   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.564551   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.603823   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.626266   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.064428   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.105379   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.125279   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.674900   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.675385   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.676227   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.681390   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:59.064734   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.165692   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.166398   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:59.564328   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.602770   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.624640   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.067722   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.103547   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.124864   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.563959   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.602630   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.625664   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.064359   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.102826   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.104081   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:01.126003   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.565047   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.602886   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.625177   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.064597   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.103816   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:02.124361   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.564516   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.665311   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.666706   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.064485   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.105612   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.165272   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:03.567575   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.602983   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.604029   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:03.632040   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.201093   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.201600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:04.201837   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.565454   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.603309   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:04.625382   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.064321   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.102747   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.125390   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.565007   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.603585   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.606685   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:05.625093   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.064802   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.103985   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:06.125246   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.564317   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.602761   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:06.625333   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.064258   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.102529   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.125876   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.565768   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.606715   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:07.606974   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.625493   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.065197   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.104904   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.125323   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.564672   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.602583   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.624763   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.064709   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.103553   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.124314   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.566874   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.603230   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.610743   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:09.625340   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.066153   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.106244   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.127304   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.565383   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.604111   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.624975   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.063691   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.114424   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.124847   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.564556   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.603914   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.624509   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.064736   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.109924   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:12.113055   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.132971   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.566596   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.609994   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.667708   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.066322   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.103070   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:13.124846   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.564215   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.603250   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:13.625165   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.513075   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.514015   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.514249   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.517130   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:14.606252   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.609611   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.628253   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.065758   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.108804   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:15.125056   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.564340   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.602338   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:15.625587   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.064457   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.165521   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.165774   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.565300   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.604461   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.605819   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:16.625276   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.064359   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.103306   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.125132   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.564072   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.602636   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.626746   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.063891   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.102802   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.127424   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.566543   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.604696   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.606075   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:18.625852   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.064666   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.115045   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.126323   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.564401   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.602781   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.664784   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.064602   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.102595   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.125193   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.563960   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.622324   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.629214   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:20.629256   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.064590   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.103658   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.130089   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.564317   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.602944   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.625303   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.063593   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.103333   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:22.124448   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.568215   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.668871   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.669590   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.064335   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.102567   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.103667   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:23.124839   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:23.564031   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.602509   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.624568   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.064687   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:24.106654   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:24.125169   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.564699   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:24.602905   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:24.625340   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.063661   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:25.103411   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.104365   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:25.125091   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.564194   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:25.602495   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.627520   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.064962   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:26.102328   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:26.125276   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.565169   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:26.606991   21482 kapi.go:107] duration metric: took 1m24.508752784s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 17:09:26.624305   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.064603   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:27.125378   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.564840   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:27.603847   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:27.624297   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.064896   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:28.124661   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.565400   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:28.625595   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.065139   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:29.125712   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.565516   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:29.625110   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.064476   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:30.102674   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:30.124935   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.563580   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:30.625405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:31.064458   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:31.124789   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:31.564434   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:31.624421   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:32.064417   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:32.103759   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:32.125125   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:32.565658   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:32.624670   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:33.063600   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:33.124672   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:33.564752   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:33.624729   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:34.065035   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:34.124931   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:34.563808   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:34.603818   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:34.625648   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:35.063961   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:35.125693   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:35.565449   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:35.625415   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:36.064875   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:36.124910   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:36.564599   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:36.604280   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:36.624988   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:37.064127   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:37.125642   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:37.565355   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:37.625600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:38.063462   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:38.124837   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:38.564215   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:38.624499   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:39.064659   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:39.103618   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:39.125502   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:39.565245   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:39.625807   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:40.064244   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:40.126555   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:40.565084   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:40.624526   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:41.064837   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:41.125639   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:41.564331   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:41.603403   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:41.624851   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:42.064634   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:42.125137   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:42.564648   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:42.624616   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:43.063925   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:43.125265   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:43.564637   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:43.603602   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:43.625078   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:44.063979   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:44.125539   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:44.564913   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:44.625937   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:45.066687   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:45.125353   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:45.571313   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:45.607054   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:45.625638   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:46.064242   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:46.124446   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:46.564558   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:46.625975   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:47.064144   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:47.124927   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:47.564681   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:47.625930   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:48.064835   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:48.103871   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:48.125560   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:48.565566   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:48.611661   21482 pod_ready.go:93] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:48.611687   21482 pod_ready.go:82] duration metric: took 1m31.014233038s for pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:48.611698   21482 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rtk85" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:48.622310   21482 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rtk85" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:48.622330   21482 pod_ready.go:82] duration metric: took 10.624805ms for pod "nvidia-device-plugin-daemonset-rtk85" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:48.622346   21482 pod_ready.go:39] duration metric: took 1m48.53765719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:09:48.622366   21482 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:09:48.622398   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:09:48.622443   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:09:48.665411   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:48.681068   21482 cri.go:89] found id: "deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:09:48.681087   21482 cri.go:89] found id: ""
	I1028 17:09:48.681103   21482 logs.go:282] 1 containers: [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231]
	I1028 17:09:48.681146   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.689469   21482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:09:48.689523   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:09:48.732164   21482 cri.go:89] found id: "c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:09:48.732181   21482 cri.go:89] found id: ""
	I1028 17:09:48.732188   21482 logs.go:282] 1 containers: [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7]
	I1028 17:09:48.732231   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.736269   21482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:09:48.736325   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:09:48.771595   21482 cri.go:89] found id: "614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:09:48.771619   21482 cri.go:89] found id: ""
	I1028 17:09:48.771626   21482 logs.go:282] 1 containers: [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d]
	I1028 17:09:48.771669   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.775879   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:09:48.775927   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:09:48.813607   21482 cri.go:89] found id: "2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:09:48.813635   21482 cri.go:89] found id: ""
	I1028 17:09:48.813645   21482 logs.go:282] 1 containers: [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0]
	I1028 17:09:48.813691   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.818152   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:09:48.818202   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:09:48.854915   21482 cri.go:89] found id: "2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:09:48.854934   21482 cri.go:89] found id: ""
	I1028 17:09:48.854941   21482 logs.go:282] 1 containers: [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924]
	I1028 17:09:48.854978   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.859147   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:09:48.859206   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:09:48.900971   21482 cri.go:89] found id: "d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:09:48.900993   21482 cri.go:89] found id: ""
	I1028 17:09:48.901000   21482 logs.go:282] 1 containers: [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951]
	I1028 17:09:48.901045   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.905230   21482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:09:48.905300   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:09:48.949084   21482 cri.go:89] found id: ""
	I1028 17:09:48.949106   21482 logs.go:282] 0 containers: []
	W1028 17:09:48.949113   21482 logs.go:284] No container was found matching "kindnet"
	I1028 17:09:48.949121   21482 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:09:48.949136   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:09:49.064804   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:49.086928   21482 logs.go:123] Gathering logs for kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] ...
	I1028 17:09:49.086950   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:09:49.126078   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:49.133053   21482 logs.go:123] Gathering logs for kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] ...
	I1028 17:09:49.133073   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:09:49.176844   21482 logs.go:123] Gathering logs for kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] ...
	I1028 17:09:49.176869   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:09:49.214094   21482 logs.go:123] Gathering logs for kubelet ...
	I1028 17:09:49.214117   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:09:49.267628   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:49.267806   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:09:49.267926   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:49.268072   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:09:49.306933   21482 logs.go:123] Gathering logs for etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] ...
	I1028 17:09:49.306970   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:09:49.373869   21482 logs.go:123] Gathering logs for coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] ...
	I1028 17:09:49.373895   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:09:49.415480   21482 logs.go:123] Gathering logs for kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] ...
	I1028 17:09:49.415507   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:09:49.478999   21482 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:09:49.479027   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:09:49.568173   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:49.625453   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:50.064764   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:50.125255   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:50.512731   21482 logs.go:123] Gathering logs for container status ...
	I1028 17:09:50.512775   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:09:50.565714   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:50.577132   21482 logs.go:123] Gathering logs for dmesg ...
	I1028 17:09:50.577157   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:09:50.601252   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:09:50.601278   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:09:50.601334   21482 out.go:270] X Problems detected in kubelet:
	W1028 17:09:50.601355   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:50.601363   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:09:50.601375   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:50.601390   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:09:50.601396   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:09:50.601406   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:09:50.626350   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:51.064650   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:51.126549   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:51.564437   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:51.625395   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:52.075607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:52.125778   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:52.565329   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:52.626204   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:53.065344   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:53.125803   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:53.565561   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:53.625727   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:54.064746   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:54.125120   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:54.564058   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:54.625455   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:55.064192   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:55.125572   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:55.565216   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:55.625501   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:56.066055   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:56.125506   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:56.565102   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:56.628873   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:57.065255   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:57.125877   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:57.565594   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:57.625149   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:58.064763   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:58.125969   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:58.563979   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:58.625194   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:59.064312   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:59.125712   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:59.565493   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:59.626447   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:00.064669   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:00.126469   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:00.564110   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:00.602569   21482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:10:00.622556   21482 api_server.go:72] duration metric: took 2m8.097343833s to wait for apiserver process to appear ...
	I1028 17:10:00.622579   21482 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:10:00.622613   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:00.622673   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:00.625854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:00.661753   21482 cri.go:89] found id: "deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:00.661769   21482 cri.go:89] found id: ""
	I1028 17:10:00.661778   21482 logs.go:282] 1 containers: [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231]
	I1028 17:10:00.661835   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.668326   21482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:00.668383   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:00.713173   21482 cri.go:89] found id: "c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:10:00.713199   21482 cri.go:89] found id: ""
	I1028 17:10:00.713206   21482 logs.go:282] 1 containers: [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7]
	I1028 17:10:00.713262   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.717355   21482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:00.717404   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:00.756433   21482 cri.go:89] found id: "614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:00.756460   21482 cri.go:89] found id: ""
	I1028 17:10:00.756483   21482 logs.go:282] 1 containers: [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d]
	I1028 17:10:00.756539   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.760590   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:00.760650   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:00.809191   21482 cri.go:89] found id: "2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:00.809220   21482 cri.go:89] found id: ""
	I1028 17:10:00.809230   21482 logs.go:282] 1 containers: [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0]
	I1028 17:10:00.809282   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.813254   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:00.813307   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:00.854158   21482 cri.go:89] found id: "2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:00.854177   21482 cri.go:89] found id: ""
	I1028 17:10:00.854183   21482 logs.go:282] 1 containers: [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924]
	I1028 17:10:00.854224   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.858277   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:00.858326   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:00.895417   21482 cri.go:89] found id: "d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:00.895437   21482 cri.go:89] found id: ""
	I1028 17:10:00.895445   21482 logs.go:282] 1 containers: [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951]
	I1028 17:10:00.895495   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.899458   21482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:00.899508   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:00.935040   21482 cri.go:89] found id: ""
	I1028 17:10:00.935063   21482 logs.go:282] 0 containers: []
	W1028 17:10:00.935071   21482 logs.go:284] No container was found matching "kindnet"
	I1028 17:10:00.935086   21482 logs.go:123] Gathering logs for kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] ...
	I1028 17:10:00.935097   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:00.986889   21482 logs.go:123] Gathering logs for etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] ...
	I1028 17:10:00.986917   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:10:01.050984   21482 logs.go:123] Gathering logs for coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] ...
	I1028 17:10:01.051027   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:01.064147   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:01.093641   21482 logs.go:123] Gathering logs for kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] ...
	I1028 17:10:01.093675   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:01.125585   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:01.141526   21482 logs.go:123] Gathering logs for kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] ...
	I1028 17:10:01.141549   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:01.178206   21482 logs.go:123] Gathering logs for kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] ...
	I1028 17:10:01.178228   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:01.236198   21482 logs.go:123] Gathering logs for container status ...
	I1028 17:10:01.236228   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:01.294101   21482 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:01.294130   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:01.308338   21482 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:01.308362   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:01.419465   21482 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:01.419494   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:01.565583   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:01.626153   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:02.065566   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:02.126712   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:02.346895   21482 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:02.346934   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:02.405044   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.405265   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:02.405431   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.405666   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:02.439725   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:02.439749   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:02.439807   21482 out.go:270] X Problems detected in kubelet:
	W1028 17:10:02.439819   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.439826   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:02.439836   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.439841   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:02.439846   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:02.439852   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:10:02.564714   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:02.625644   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:03.064212   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:03.125405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:03.565108   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:03.625244   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:04.064095   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:04.125481   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:04.564664   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:04.624961   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:05.064130   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:05.125671   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:05.564916   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:05.626114   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:06.064607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:06.125984   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:06.564395   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:06.625715   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:07.064818   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:07.125205   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:07.565116   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:07.625982   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:08.064503   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:08.125916   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:08.564008   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:08.625319   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:09.064270   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:09.127687   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:09.565475   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:09.625592   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:10.064655   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:10.124560   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:10.564806   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:10.624951   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:11.064769   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:11.124909   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:11.564208   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:11.625454   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:12.064643   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:12.125169   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:12.441318   21482 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1028 17:10:12.446440   21482 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1028 17:10:12.447413   21482 api_server.go:141] control plane version: v1.31.2
	I1028 17:10:12.447435   21482 api_server.go:131] duration metric: took 11.82484834s to wait for apiserver health ...
	I1028 17:10:12.447444   21482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:10:12.447468   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:12.447520   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:12.486393   21482 cri.go:89] found id: "deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:12.486419   21482 cri.go:89] found id: ""
	I1028 17:10:12.486428   21482 logs.go:282] 1 containers: [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231]
	I1028 17:10:12.486489   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.490768   21482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:12.490833   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:12.530655   21482 cri.go:89] found id: "c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:10:12.530675   21482 cri.go:89] found id: ""
	I1028 17:10:12.530684   21482 logs.go:282] 1 containers: [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7]
	I1028 17:10:12.530738   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.534929   21482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:12.534985   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:12.565431   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:12.594370   21482 cri.go:89] found id: "614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:12.594396   21482 cri.go:89] found id: ""
	I1028 17:10:12.594406   21482 logs.go:282] 1 containers: [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d]
	I1028 17:10:12.594457   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.600281   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:12.600346   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:12.626070   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:12.640069   21482 cri.go:89] found id: "2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:12.640089   21482 cri.go:89] found id: ""
	I1028 17:10:12.640096   21482 logs.go:282] 1 containers: [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0]
	I1028 17:10:12.640145   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.644085   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:12.644120   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:12.683856   21482 cri.go:89] found id: "2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:12.683872   21482 cri.go:89] found id: ""
	I1028 17:10:12.683879   21482 logs.go:282] 1 containers: [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924]
	I1028 17:10:12.683927   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.688035   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:12.688100   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:12.725241   21482 cri.go:89] found id: "d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:12.725259   21482 cri.go:89] found id: ""
	I1028 17:10:12.725266   21482 logs.go:282] 1 containers: [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951]
	I1028 17:10:12.725311   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.729385   21482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:12.729451   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:12.779592   21482 cri.go:89] found id: ""
	I1028 17:10:12.779620   21482 logs.go:282] 0 containers: []
	W1028 17:10:12.779630   21482 logs.go:284] No container was found matching "kindnet"
	I1028 17:10:12.779640   21482 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:12.779655   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:12.796430   21482 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:12.796453   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:12.907992   21482 logs.go:123] Gathering logs for kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] ...
	I1028 17:10:12.908024   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:12.960227   21482 logs.go:123] Gathering logs for coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] ...
	I1028 17:10:12.960252   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:12.998312   21482 logs.go:123] Gathering logs for kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] ...
	I1028 17:10:12.998340   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:13.052115   21482 logs.go:123] Gathering logs for kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] ...
	I1028 17:10:13.052143   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:13.064342   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:13.111093   21482 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:13.111119   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:13.126771   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:13.565186   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:13.625150   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:14.036892   21482 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:14.036932   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 17:10:14.063799   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1028 17:10:14.110874   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.111053   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:14.111177   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.111327   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:14.125976   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:14.146814   21482 logs.go:123] Gathering logs for kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] ...
	I1028 17:10:14.146839   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:14.191625   21482 logs.go:123] Gathering logs for container status ...
	I1028 17:10:14.191650   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:14.303274   21482 logs.go:123] Gathering logs for etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] ...
	I1028 17:10:14.303319   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:10:14.384065   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:14.384094   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:14.384145   21482 out.go:270] X Problems detected in kubelet:
	W1028 17:10:14.384157   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.384162   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:14.384169   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.384176   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:14.384181   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:14.384185   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:10:14.564496   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:14.625988   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:15.063783   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:15.124703   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:15.569363   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:15.626057   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:16.067784   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:16.125792   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:16.563851   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:16.624740   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:17.064741   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:17.124946   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:17.563814   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:17.625743   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:18.317822   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:18.318723   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:18.564050   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:18.625852   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:19.064866   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:19.125591   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:19.564135   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:19.625369   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:20.064107   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:20.125577   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:20.564582   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:20.626499   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:21.065283   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:21.165262   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:21.565269   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:21.625513   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:22.065266   21482 kapi.go:107] duration metric: took 2m22.005281338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 17:10:22.125670   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:22.626265   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:23.126980   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:23.625719   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:24.125180   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:24.393674   21482 system_pods.go:59] 18 kube-system pods found
	I1028 17:10:24.393704   21482 system_pods.go:61] "amd-gpu-device-plugin-cmh8f" [5e0752de-e01b-4c91-989c-235728654d63] Running
	I1028 17:10:24.393709   21482 system_pods.go:61] "coredns-7c65d6cfc9-znpww" [5d9f893c-87ee-4a07-8ca0-7fed06690855] Running
	I1028 17:10:24.393714   21482 system_pods.go:61] "csi-hostpath-attacher-0" [ae387c41-3c73-426e-8a23-9836bb70b04c] Running
	I1028 17:10:24.393718   21482 system_pods.go:61] "csi-hostpath-resizer-0" [4f427e09-9338-4c6c-9187-448f71011f7d] Running
	I1028 17:10:24.393721   21482 system_pods.go:61] "csi-hostpathplugin-bj7bv" [ac75459b-cd05-42f9-9cdb-a2a16e61251d] Running
	I1028 17:10:24.393724   21482 system_pods.go:61] "etcd-addons-186035" [7759663a-5012-4639-889f-de52909f8a06] Running
	I1028 17:10:24.393727   21482 system_pods.go:61] "kube-apiserver-addons-186035" [42a946b2-0ce0-490f-8279-657d7f0f8172] Running
	I1028 17:10:24.393731   21482 system_pods.go:61] "kube-controller-manager-addons-186035" [175b2784-a103-4f52-8d45-137cf16ab3d0] Running
	I1028 17:10:24.393734   21482 system_pods.go:61] "kube-ingress-dns-minikube" [9018f101-e082-4dea-bf69-3e8a31a66ae8] Running
	I1028 17:10:24.393738   21482 system_pods.go:61] "kube-proxy-qhnsh" [a82fd776-0217-40e3-a973-146eb6cb0c5a] Running
	I1028 17:10:24.393740   21482 system_pods.go:61] "kube-scheduler-addons-186035" [6aced9ea-3f64-41a1-bbb0-f3fda6396aa7] Running
	I1028 17:10:24.393743   21482 system_pods.go:61] "metrics-server-84c5f94fbc-6vwqq" [2a6e6b1d-eaec-41b1-96c8-a3b0444088ec] Running
	I1028 17:10:24.393747   21482 system_pods.go:61] "nvidia-device-plugin-daemonset-rtk85" [cf1f792a-317b-462d-bd89-3d40fc15ae2e] Running
	I1028 17:10:24.393752   21482 system_pods.go:61] "registry-66c9cd494c-zzlqq" [b84d4f13-3ad1-4d7c-81fc-5def543dae51] Running
	I1028 17:10:24.393759   21482 system_pods.go:61] "registry-proxy-7nj9m" [783bc207-34a0-49f6-a31b-d358ca0aa6e3] Running
	I1028 17:10:24.393764   21482 system_pods.go:61] "snapshot-controller-56fcc65765-p7p8n" [2c816687-c0da-413a-a2e6-7491aad1e60b] Running
	I1028 17:10:24.393769   21482 system_pods.go:61] "snapshot-controller-56fcc65765-rm96g" [82f57471-8403-417f-be39-44be24e4b5cf] Running
	I1028 17:10:24.393776   21482 system_pods.go:61] "storage-provisioner" [c8b798cc-678e-4c24-9e8e-d8e87d5b7be4] Running
	I1028 17:10:24.393783   21482 system_pods.go:74] duration metric: took 11.946333127s to wait for pod list to return data ...
	I1028 17:10:24.393797   21482 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:10:24.396200   21482 default_sa.go:45] found service account: "default"
	I1028 17:10:24.396215   21482 default_sa.go:55] duration metric: took 2.413648ms for default service account to be created ...
	I1028 17:10:24.396222   21482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:10:24.404331   21482 system_pods.go:86] 18 kube-system pods found
	I1028 17:10:24.404351   21482 system_pods.go:89] "amd-gpu-device-plugin-cmh8f" [5e0752de-e01b-4c91-989c-235728654d63] Running
	I1028 17:10:24.404356   21482 system_pods.go:89] "coredns-7c65d6cfc9-znpww" [5d9f893c-87ee-4a07-8ca0-7fed06690855] Running
	I1028 17:10:24.404361   21482 system_pods.go:89] "csi-hostpath-attacher-0" [ae387c41-3c73-426e-8a23-9836bb70b04c] Running
	I1028 17:10:24.404364   21482 system_pods.go:89] "csi-hostpath-resizer-0" [4f427e09-9338-4c6c-9187-448f71011f7d] Running
	I1028 17:10:24.404367   21482 system_pods.go:89] "csi-hostpathplugin-bj7bv" [ac75459b-cd05-42f9-9cdb-a2a16e61251d] Running
	I1028 17:10:24.404370   21482 system_pods.go:89] "etcd-addons-186035" [7759663a-5012-4639-889f-de52909f8a06] Running
	I1028 17:10:24.404374   21482 system_pods.go:89] "kube-apiserver-addons-186035" [42a946b2-0ce0-490f-8279-657d7f0f8172] Running
	I1028 17:10:24.404377   21482 system_pods.go:89] "kube-controller-manager-addons-186035" [175b2784-a103-4f52-8d45-137cf16ab3d0] Running
	I1028 17:10:24.404388   21482 system_pods.go:89] "kube-ingress-dns-minikube" [9018f101-e082-4dea-bf69-3e8a31a66ae8] Running
	I1028 17:10:24.404396   21482 system_pods.go:89] "kube-proxy-qhnsh" [a82fd776-0217-40e3-a973-146eb6cb0c5a] Running
	I1028 17:10:24.404399   21482 system_pods.go:89] "kube-scheduler-addons-186035" [6aced9ea-3f64-41a1-bbb0-f3fda6396aa7] Running
	I1028 17:10:24.404402   21482 system_pods.go:89] "metrics-server-84c5f94fbc-6vwqq" [2a6e6b1d-eaec-41b1-96c8-a3b0444088ec] Running
	I1028 17:10:24.404406   21482 system_pods.go:89] "nvidia-device-plugin-daemonset-rtk85" [cf1f792a-317b-462d-bd89-3d40fc15ae2e] Running
	I1028 17:10:24.404409   21482 system_pods.go:89] "registry-66c9cd494c-zzlqq" [b84d4f13-3ad1-4d7c-81fc-5def543dae51] Running
	I1028 17:10:24.404412   21482 system_pods.go:89] "registry-proxy-7nj9m" [783bc207-34a0-49f6-a31b-d358ca0aa6e3] Running
	I1028 17:10:24.404415   21482 system_pods.go:89] "snapshot-controller-56fcc65765-p7p8n" [2c816687-c0da-413a-a2e6-7491aad1e60b] Running
	I1028 17:10:24.404419   21482 system_pods.go:89] "snapshot-controller-56fcc65765-rm96g" [82f57471-8403-417f-be39-44be24e4b5cf] Running
	I1028 17:10:24.404423   21482 system_pods.go:89] "storage-provisioner" [c8b798cc-678e-4c24-9e8e-d8e87d5b7be4] Running
	I1028 17:10:24.404429   21482 system_pods.go:126] duration metric: took 8.203232ms to wait for k8s-apps to be running ...
	I1028 17:10:24.404437   21482 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:10:24.404488   21482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:10:24.424164   21482 system_svc.go:56] duration metric: took 19.720749ms WaitForService to wait for kubelet
	I1028 17:10:24.424183   21482 kubeadm.go:582] duration metric: took 2m31.898978217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:10:24.424199   21482 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:10:24.427142   21482 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:10:24.427164   21482 node_conditions.go:123] node cpu capacity is 2
	I1028 17:10:24.427176   21482 node_conditions.go:105] duration metric: took 2.971407ms to run NodePressure ...
	I1028 17:10:24.427187   21482 start.go:241] waiting for startup goroutines ...
	I1028 17:10:24.625336   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:25.125716   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:25.626072   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:26.126525   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:26.625021   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:27.125392   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:27.626372   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:28.126438   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:28.626186   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:29.125441   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:29.626176   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:30.125507   21482 kapi.go:107] duration metric: took 2m26.503556416s to wait for kubernetes.io/minikube-addons=gcp-auth ...
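The kapi.go loop that just finished polls for a pod carrying the kubernetes.io/minikube-addons=gcp-auth label until it leaves Pending. A minimal sketch of the same label-selector query with client-go, assuming the default ~/.kube/config location (not minikube's own kapi helper):

// Sketch only: list pods matching the label the wait loop above polls for.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/minikube-addons=gcp-auth",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Phase is what the log above reports as "current state".
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}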
	I1028 17:10:30.127026   21482 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-186035 cluster.
	I1028 17:10:30.128208   21482 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 17:10:30.129290   21482 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1028 17:10:30.130406   21482 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1028 17:10:30.131442   21482 addons.go:510] duration metric: took 2m37.606202627s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner default-storageclass amd-gpu-device-plugin storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1028 17:10:30.131478   21482 start.go:246] waiting for cluster config update ...
	I1028 17:10:30.131496   21482 start.go:255] writing updated cluster config ...
	I1028 17:10:30.131714   21482 ssh_runner.go:195] Run: rm -f paused
	I1028 17:10:30.182008   21482 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:10:30.183504   21482 out.go:177] * Done! kubectl is now configured to use "addons-186035" cluster and "default" namespace by default
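The gcp-auth message above explains that a pod can opt out of the credential mount by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod, built with the stock k8s.io/api types and printed as YAML; the label value "true", the pod name, and the busybox image are illustrative choices, not something the log prescribes:

// Sketch only: a pod that opts out of the gcp-auth credential mount.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			// The gcp-auth addon skips pods that carry this label key.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}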
	
	
	==> CRI-O <==
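The entries below are CRI-O's debug log of the kubelet's periodic CRI polling: Version, ImageFsInfo, and ListContainers requests with an empty filter, each answered with the full container list. A minimal sketch of issuing the same ListContainers call directly, using the generated k8s.io/cri-api client over gRPC; the socket path /var/run/crio/crio.sock and read permission on it are assumptions:

// Sketch only: call /runtime.v1.RuntimeService/ListContainers against CRI-O,
// with the same empty filter as in the log, and print the container list.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimev1.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Same fields the log's ListContainersResponse dumps: id, name, state.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}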
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.791529760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135617791505745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=119b767b-8818-43c2-a656-1e680af37aec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.792032930Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60534105-1770-4dd2-a1f1-7bd3683cd5c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.792098820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60534105-1770-4dd2-a1f1-7bd3683cd5c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.792438698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e0a4d20230cb5db88512f29b2186843dd592661365ca46dd7f3aa0d2eb11ce,PodSandboxId:e4003c3546ef165b0afa116a347d7c3a8e5a9f8f6c9856e01dd695b81fd66a53,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730135420958617626,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-56fsc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c0b30184-02cf-4552-8371-d24852a42bc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2e50465484ed219334717864cbb2be10539a8e1987e392d0fe3ee57b9ebe4902,PodSandboxId:61ada3741ce4e9be09e055d882e502b0dcd3be20c3243d653f020b693638d483,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730135350968673087,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9pthp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bd1c92cb-1e81-424a-9b1c-f192364d7c82,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14721c7d436e076c6fed67cad08666463d65cf2cae904f6e2366ae285fab77c0,PodSandboxId:359e83230de10b41f346ec75cbb70b1bee32d00df34190b55a6161e817bf36aa,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730135350762162883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wf6gm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad333a97-8566-4f68-af9e-e531e10262d5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6659fb119911f4dceaa6f98e4ec7cfbc9896e29e4ee776e738dcc04239dc85ed,PodSandboxId:50cd2c38ee09fcb93d3ac4a9ee5f662afa1a13450c43e9c56c7eb46ac3a11f31,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730135306552136986,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9018f101-e082-4dea-bf69-3e8a31a66ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c2
8978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43
b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd0920
12f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=60534105-1770-4dd2-a1f1-7bd3683cd5c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.833671241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73db2ab1-32d5-418e-959a-18366e10dc1d name=/runtime.v1.RuntimeService/Version
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.833741898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73db2ab1-32d5-418e-959a-18366e10dc1d name=/runtime.v1.RuntimeService/Version
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.834941795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34942b88-8c6a-41f4-9656-66c258dfecb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.836190441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135617836166256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34942b88-8c6a-41f4-9656-66c258dfecb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.836887999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fcdf2ed-f8e2-42c6-b876-b0e1ecb1cc25 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.836961711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fcdf2ed-f8e2-42c6-b876-b0e1ecb1cc25 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.837273260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e0a4d20230cb5db88512f29b2186843dd592661365ca46dd7f3aa0d2eb11ce,PodSandboxId:e4003c3546ef165b0afa116a347d7c3a8e5a9f8f6c9856e01dd695b81fd66a53,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730135420958617626,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-56fsc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c0b30184-02cf-4552-8371-d24852a42bc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2e50465484ed219334717864cbb2be10539a8e1987e392d0fe3ee57b9ebe4902,PodSandboxId:61ada3741ce4e9be09e055d882e502b0dcd3be20c3243d653f020b693638d483,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730135350968673087,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9pthp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bd1c92cb-1e81-424a-9b1c-f192364d7c82,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14721c7d436e076c6fed67cad08666463d65cf2cae904f6e2366ae285fab77c0,PodSandboxId:359e83230de10b41f346ec75cbb70b1bee32d00df34190b55a6161e817bf36aa,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730135350762162883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wf6gm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad333a97-8566-4f68-af9e-e531e10262d5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6659fb119911f4dceaa6f98e4ec7cfbc9896e29e4ee776e738dcc04239dc85ed,PodSandboxId:50cd2c38ee09fcb93d3ac4a9ee5f662afa1a13450c43e9c56c7eb46ac3a11f31,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730135306552136986,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9018f101-e082-4dea-bf69-3e8a31a66ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c2
8978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43
b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd0920
12f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=6fcdf2ed-f8e2-42c6-b876-b0e1ecb1cc25 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.877683909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afdb216c-632b-492a-b6d6-b22445792b35 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.877773586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afdb216c-632b-492a-b6d6-b22445792b35 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.879189746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e466a999-096c-4209-b33d-d2994040e9cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.880333243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135617880309458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e466a999-096c-4209-b33d-d2994040e9cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.880995786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fee75d34-ff89-43f7-8051-1c435838eb6a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.881067724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fee75d34-ff89-43f7-8051-1c435838eb6a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.881552956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e0a4d20230cb5db88512f29b2186843dd592661365ca46dd7f3aa0d2eb11ce,PodSandboxId:e4003c3546ef165b0afa116a347d7c3a8e5a9f8f6c9856e01dd695b81fd66a53,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730135420958617626,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-56fsc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c0b30184-02cf-4552-8371-d24852a42bc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2e50465484ed219334717864cbb2be10539a8e1987e392d0fe3ee57b9ebe4902,PodSandboxId:61ada3741ce4e9be09e055d882e502b0dcd3be20c3243d653f020b693638d483,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730135350968673087,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9pthp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bd1c92cb-1e81-424a-9b1c-f192364d7c82,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14721c7d436e076c6fed67cad08666463d65cf2cae904f6e2366ae285fab77c0,PodSandboxId:359e83230de10b41f346ec75cbb70b1bee32d00df34190b55a6161e817bf36aa,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730135350762162883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wf6gm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad333a97-8566-4f68-af9e-e531e10262d5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6659fb119911f4dceaa6f98e4ec7cfbc9896e29e4ee776e738dcc04239dc85ed,PodSandboxId:50cd2c38ee09fcb93d3ac4a9ee5f662afa1a13450c43e9c56c7eb46ac3a11f31,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730135306552136986,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9018f101-e082-4dea-bf69-3e8a31a66ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c2
8978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43
b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd0920
12f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=fee75d34-ff89-43f7-8051-1c435838eb6a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.917674995Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fecdb9d8-331e-413a-bfe2-fe4af4ad6251 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.917751423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fecdb9d8-331e-413a-bfe2-fe4af4ad6251 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.918675857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de3fdcf9-118a-497b-a56c-f8cdb8336c92 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.919983638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135617919959709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de3fdcf9-118a-497b-a56c-f8cdb8336c92 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.920580133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8eb97ecc-4d59-4236-beb7-28469e0adb63 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.920648708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8eb97ecc-4d59-4236-beb7-28469e0adb63 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:13:37 addons-186035 crio[659]: time="2024-10-28 17:13:37.920989572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e0a4d20230cb5db88512f29b2186843dd592661365ca46dd7f3aa0d2eb11ce,PodSandboxId:e4003c3546ef165b0afa116a347d7c3a8e5a9f8f6c9856e01dd695b81fd66a53,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730135420958617626,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-56fsc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c0b30184-02cf-4552-8371-d24852a42bc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2e50465484ed219334717864cbb2be10539a8e1987e392d0fe3ee57b9ebe4902,PodSandboxId:61ada3741ce4e9be09e055d882e502b0dcd3be20c3243d653f020b693638d483,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730135350968673087,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9pthp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bd1c92cb-1e81-424a-9b1c-f192364d7c82,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14721c7d436e076c6fed67cad08666463d65cf2cae904f6e2366ae285fab77c0,PodSandboxId:359e83230de10b41f346ec75cbb70b1bee32d00df34190b55a6161e817bf36aa,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730135350762162883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wf6gm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad333a97-8566-4f68-af9e-e531e10262d5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6659fb119911f4dceaa6f98e4ec7cfbc9896e29e4ee776e738dcc04239dc85ed,PodSandboxId:50cd2c38ee09fcb93d3ac4a9ee5f662afa1a13450c43e9c56c7eb46ac3a11f31,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730135306552136986,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9018f101-e082-4dea-bf69-3e8a31a66ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c2
8978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43
b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd0920
12f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=8eb97ecc-4d59-4236-beb7-28469e0adb63 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c42d8554f886       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   33e247f619c96       nginx
	ebd763de10a43       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   5756284ab9ccf       busybox
	d5e0a4d20230c       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   e4003c3546ef1       ingress-nginx-controller-5f85ff4588-56fsc
	2e50465484ed2       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   61ada3741ce4e       ingress-nginx-admission-patch-9pthp
	14721c7d436e0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   359e83230de10       ingress-nginx-admission-create-wf6gm
	1bc2426fe3759       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   848fd1506d897       metrics-server-84c5f94fbc-6vwqq
	6659fb119911f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   50cd2c38ee09f       kube-ingress-dns-minikube
	cc0ea2ebc9079       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   aa9c667b13f3a       amd-gpu-device-plugin-cmh8f
	118031ba1a771       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   08f45bd79f068       storage-provisioner
	614f092a6a9e0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   5c0ea02eda904       coredns-7c65d6cfc9-znpww
	2369bc3d165e3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   291c01c08a0dd       kube-proxy-qhnsh
	d09c6cd8e8adc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   e8ce4f650c58e       kube-controller-manager-addons-186035
	2b168fbe99e03       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   98efa1525a459       kube-scheduler-addons-186035
	c537af4c03503       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   6c46a6fcf568d       etcd-addons-186035
	deca3062b168e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   e8b2750535d7b       kube-apiserver-addons-186035
	
	
	==> coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] <==
	[INFO] 10.244.0.8:58125 - 32671 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000084056s
	[INFO] 10.244.0.8:58125 - 31979 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080303s
	[INFO] 10.244.0.8:58125 - 62169 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000422114s
	[INFO] 10.244.0.8:58125 - 47118 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00020091s
	[INFO] 10.244.0.8:58125 - 23599 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000075064s
	[INFO] 10.244.0.8:58125 - 32266 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000090595s
	[INFO] 10.244.0.8:58125 - 36963 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000081497s
	[INFO] 10.244.0.8:45922 - 22589 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000066977s
	[INFO] 10.244.0.8:45922 - 22878 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000037457s
	[INFO] 10.244.0.8:50056 - 19459 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041345s
	[INFO] 10.244.0.8:50056 - 19183 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041114s
	[INFO] 10.244.0.8:46153 - 56996 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030161s
	[INFO] 10.244.0.8:46153 - 56761 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033125s
	[INFO] 10.244.0.8:34720 - 31713 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000030709s
	[INFO] 10.244.0.8:34720 - 31486 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035067s
	[INFO] 10.244.0.23:41957 - 64750 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000554254s
	[INFO] 10.244.0.23:52161 - 61091 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000787937s
	[INFO] 10.244.0.23:52935 - 43753 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102007s
	[INFO] 10.244.0.23:57545 - 33589 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000264069s
	[INFO] 10.244.0.23:54404 - 41348 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117377s
	[INFO] 10.244.0.23:42623 - 2656 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000263488s
	[INFO] 10.244.0.23:45278 - 27000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004271635s
	[INFO] 10.244.0.23:59079 - 46578 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.004707365s
	[INFO] 10.244.0.26:45662 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001264407s
	[INFO] 10.244.0.26:53186 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000342409s
	
	
	==> describe nodes <==
	Name:               addons-186035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-186035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=addons-186035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_07_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-186035
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-186035
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:13:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:11:51 +0000   Mon, 28 Oct 2024 17:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:11:51 +0000   Mon, 28 Oct 2024 17:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:11:51 +0000   Mon, 28 Oct 2024 17:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:11:51 +0000   Mon, 28 Oct 2024 17:07:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    addons-186035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 05b49f430d1b4db0b0d719b6f9779dde
	  System UUID:                05b49f43-0d1b-4db0-b0d7-19b6f9779dde
	  Boot ID:                    61e165df-592d-406c-abb1-782959670d56
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     hello-world-app-55bf9c44b4-jklgj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-56fsc    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m39s
	  kube-system                 amd-gpu-device-plugin-cmh8f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 coredns-7c65d6cfc9-znpww                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m46s
	  kube-system                 etcd-addons-186035                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m53s
	  kube-system                 kube-apiserver-addons-186035                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-addons-186035        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-qhnsh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-addons-186035                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 metrics-server-84c5f94fbc-6vwqq              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m44s  kube-proxy       
	  Normal  Starting                 5m52s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m51s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m51s  kubelet          Node addons-186035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s  kubelet          Node addons-186035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s  kubelet          Node addons-186035 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m50s  kubelet          Node addons-186035 status is now: NodeReady
	  Normal  RegisteredNode           5m47s  node-controller  Node addons-186035 event: Registered Node addons-186035 in Controller
	
	
	==> dmesg <==
	[  +0.096914] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.664234] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.147381] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +4.853550] kauditd_printk_skb: 113 callbacks suppressed
	[Oct28 17:08] kauditd_printk_skb: 163 callbacks suppressed
	[  +8.487748] kauditd_printk_skb: 57 callbacks suppressed
	[ +32.742273] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 17:09] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.126833] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.434606] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.289413] kauditd_printk_skb: 28 callbacks suppressed
	[Oct28 17:10] kauditd_printk_skb: 3 callbacks suppressed
	[  +8.793506] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.205726] kauditd_printk_skb: 13 callbacks suppressed
	[ +16.063287] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 17:11] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.477576] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.010649] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.011520] kauditd_printk_skb: 45 callbacks suppressed
	[  +7.453289] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.958909] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.750746] kauditd_printk_skb: 38 callbacks suppressed
	[Oct28 17:12] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.922265] kauditd_printk_skb: 57 callbacks suppressed
	[Oct28 17:13] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] <==
	{"level":"warn","ts":"2024-10-28T17:09:14.476672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.413917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T17:09:14.476765Z","caller":"traceutil/trace.go:171","msg":"trace[2121358349] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1027; }","duration":"354.514001ms","start":"2024-10-28T17:09:14.122238Z","end":"2024-10-28T17:09:14.476752Z","steps":["trace[2121358349] 'agreement among raft nodes before linearized reading'  (duration: 354.39216ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.476862Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.122209Z","time spent":"354.645474ms","remote":"127.0.0.1:58058","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":12,"response size":30,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	{"level":"warn","ts":"2024-10-28T17:09:14.477181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.315469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:09:14.477285Z","caller":"traceutil/trace.go:171","msg":"trace[721181895] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1027; }","duration":"375.42189ms","start":"2024-10-28T17:09:14.101855Z","end":"2024-10-28T17:09:14.477277Z","steps":["trace[721181895] 'agreement among raft nodes before linearized reading'  (duration: 375.243903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.477320Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.101826Z","time spent":"375.489256ms","remote":"127.0.0.1:58074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-28T17:09:14.478461Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.153479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:09:14.480491Z","caller":"traceutil/trace.go:171","msg":"trace[624477916] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1027; }","duration":"402.16463ms","start":"2024-10-28T17:09:14.078298Z","end":"2024-10-28T17:09:14.480462Z","steps":["trace[624477916] 'agreement among raft nodes before linearized reading'  (duration: 400.13886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.480767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.078256Z","time spent":"402.498013ms","remote":"127.0.0.1:58074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-28T17:09:14.480547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"401.877568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-6vwqq\" ","response":"range_response_count:1 size:4564"}
	{"level":"info","ts":"2024-10-28T17:09:14.481013Z","caller":"traceutil/trace.go:171","msg":"trace[588713358] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-6vwqq; range_end:; response_count:1; response_revision:1027; }","duration":"402.678565ms","start":"2024-10-28T17:09:14.078326Z","end":"2024-10-28T17:09:14.481005Z","steps":["trace[588713358] 'agreement among raft nodes before linearized reading'  (duration: 399.469595ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.481057Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.078311Z","time spent":"402.737496ms","remote":"127.0.0.1:58074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4587,"request content":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-6vwqq\" "}
	{"level":"info","ts":"2024-10-28T17:09:57.386676Z","caller":"traceutil/trace.go:171","msg":"trace[1893733462] transaction","detail":"{read_only:false; response_revision:1168; number_of_response:1; }","duration":"269.910014ms","start":"2024-10-28T17:09:57.116753Z","end":"2024-10-28T17:09:57.386663Z","steps":["trace[1893733462] 'process raft request'  (duration: 269.311991ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:10:18.289247Z","caller":"traceutil/trace.go:171","msg":"trace[1543686112] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1239; }","duration":"252.746722ms","start":"2024-10-28T17:10:18.036485Z","end":"2024-10-28T17:10:18.289232Z","steps":["trace[1543686112] 'read index received'  (duration: 252.628617ms)","trace[1543686112] 'applied index is now lower than readState.Index'  (duration: 117.708µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T17:10:18.289515Z","caller":"traceutil/trace.go:171","msg":"trace[1386694662] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"287.374118ms","start":"2024-10-28T17:10:18.002126Z","end":"2024-10-28T17:10:18.289500Z","steps":["trace[1386694662] 'process raft request'  (duration: 287.024229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:10:18.289594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.759328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:10:18.290266Z","caller":"traceutil/trace.go:171","msg":"trace[1924234898] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"191.484592ms","start":"2024-10-28T17:10:18.098770Z","end":"2024-10-28T17:10:18.290255Z","steps":["trace[1924234898] 'agreement among raft nodes before linearized reading'  (duration: 190.740891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:10:18.289652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.166755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:10:18.290366Z","caller":"traceutil/trace.go:171","msg":"trace[224914482] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"253.872974ms","start":"2024-10-28T17:10:18.036481Z","end":"2024-10-28T17:10:18.290354Z","steps":["trace[224914482] 'agreement among raft nodes before linearized reading'  (duration: 253.15913ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:11:02.896945Z","caller":"traceutil/trace.go:171","msg":"trace[1650506679] transaction","detail":"{read_only:false; response_revision:1392; number_of_response:1; }","duration":"363.693976ms","start":"2024-10-28T17:11:02.533222Z","end":"2024-10-28T17:11:02.896916Z","steps":["trace[1650506679] 'process raft request'  (duration: 363.360466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:11:02.898061Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:11:02.533209Z","time spent":"364.037705ms","remote":"127.0.0.1:32806","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1380 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-28T17:12:13.379135Z","caller":"traceutil/trace.go:171","msg":"trace[641105911] linearizableReadLoop","detail":"{readStateIndex:1946; appliedIndex:1945; }","duration":"206.174523ms","start":"2024-10-28T17:12:13.172941Z","end":"2024-10-28T17:12:13.379115Z","steps":["trace[641105911] 'read index received'  (duration: 206.032011ms)","trace[641105911] 'applied index is now lower than readState.Index'  (duration: 142.122µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T17:12:13.379287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.31552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-resizer-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:12:13.379309Z","caller":"traceutil/trace.go:171","msg":"trace[1805250495] range","detail":"{range_begin:/registry/roles/kube-system/external-resizer-cfg; range_end:; response_count:0; response_revision:1868; }","duration":"206.389516ms","start":"2024-10-28T17:12:13.172914Z","end":"2024-10-28T17:12:13.379304Z","steps":["trace[1805250495] 'agreement among raft nodes before linearized reading'  (duration: 206.272199ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:12:13.379693Z","caller":"traceutil/trace.go:171","msg":"trace[63220696] transaction","detail":"{read_only:false; response_revision:1868; number_of_response:1; }","duration":"281.43557ms","start":"2024-10-28T17:12:13.098221Z","end":"2024-10-28T17:12:13.379657Z","steps":["trace[63220696] 'process raft request'  (duration: 280.79272ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:13:38 up 6 min,  0 users,  load average: 0.32, 0.80, 0.46
	Linux addons-186035 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] <==
	E1028 17:09:48.532473       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.210.69:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.210.69:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.210.69:443: connect: connection refused" logger="UnhandledError"
	I1028 17:09:48.640184       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 17:10:46.563644       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8443->192.168.39.1:57084: use of closed network connection
	E1028 17:10:46.747002       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8443->192.168.39.1:57098: use of closed network connection
	I1028 17:10:55.925121       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.194.185"}
	I1028 17:11:07.346841       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 17:11:08.375464       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 17:11:13.018175       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 17:11:13.196569       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.35.159"}
	I1028 17:11:49.231716       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1028 17:11:51.635296       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1028 17:12:08.764581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.764693       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.781206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.781267       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.814601       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.814658       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.920200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.920763       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.927363       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.927464       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 17:12:09.927812       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 17:12:09.927876       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 17:12:09.941482       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 17:13:36.771002       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.115.220"}
	
	
	==> kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] <==
	W1028 17:12:26.313505       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:26.313602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:27.560456       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:27.560565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:31.138967       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:31.139020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:31.388138       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:31.388192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1028 17:12:31.461131       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W1028 17:12:46.098594       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:46.098794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:51.287761       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:51.287878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:12:55.860174       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:12:55.860230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:09.909233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:09.909436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:26.376992       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:26.377053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:13:31.477686       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:13:31.477748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1028 17:13:36.605018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.789011ms"
	I1028 17:13:36.616778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.672583ms"
	I1028 17:13:36.637913       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="21.073178ms"
	I1028 17:13:36.638007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.404µs"
	
	
	==> kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 17:07:53.707184       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 17:07:53.721588       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.15"]
	E1028 17:07:53.721667       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:07:53.796656       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 17:07:53.796707       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 17:07:53.796742       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:07:53.801187       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:07:53.801543       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:07:53.801569       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:07:53.802664       1 config.go:199] "Starting service config controller"
	I1028 17:07:53.802681       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:07:53.802712       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:07:53.802716       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:07:53.808441       1 config.go:328] "Starting node config controller"
	I1028 17:07:53.808455       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:07:53.902800       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:07:53.902869       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:07:53.908652       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] <==
	W1028 17:07:44.369744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:44.369773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:44.369816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:07:44.369844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:44.369952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 17:07:44.370028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.183567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.183685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.247520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:07:45.247608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.342907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 17:07:45.342985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.351805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:07:45.352649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.353589       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 17:07:45.353647       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 17:07:45.357613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.357668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.461875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.461906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.495562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:07:45.495611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.523940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.524042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 17:07:48.461053       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587055    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="hostpath"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587065    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="csi-provisioner"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587122    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c816687-c0da-413a-a2e6-7491aad1e60b" containerName="volume-snapshot-controller"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587131    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30ddb07f-6a04-4053-830d-43b6b63e81e0" containerName="local-path-provisioner"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587137    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82f57471-8403-417f-be39-44be24e4b5cf" containerName="volume-snapshot-controller"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587143    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9521741b-1fd2-4c79-904e-5c0457733369" containerName="task-pv-container"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587149    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae387c41-3c73-426e-8a23-9836bb70b04c" containerName="csi-attacher"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587155    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="liveness-probe"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: E1028 17:13:36.587161    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="csi-snapshotter"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587260    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c816687-c0da-413a-a2e6-7491aad1e60b" containerName="volume-snapshot-controller"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587269    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9e1c49-df24-41a8-b38a-cf64b68716ab" containerName="yakd"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587312    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="csi-external-health-monitor-controller"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587318    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="9521741b-1fd2-4c79-904e-5c0457733369" containerName="task-pv-container"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587325    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ddb07f-6a04-4053-830d-43b6b63e81e0" containerName="local-path-provisioner"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587330    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="liveness-probe"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587334    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="82f57471-8403-417f-be39-44be24e4b5cf" containerName="volume-snapshot-controller"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587338    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="node-driver-registrar"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587343    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="csi-snapshotter"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587349    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f427e09-9338-4c6c-9187-448f71011f7d" containerName="csi-resizer"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587354    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="hostpath"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587358    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac75459b-cd05-42f9-9cdb-a2a16e61251d" containerName="csi-provisioner"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.587362    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae387c41-3c73-426e-8a23-9836bb70b04c" containerName="csi-attacher"
	Oct 28 17:13:36 addons-186035 kubelet[1202]: I1028 17:13:36.736894    1202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85cqh\" (UniqueName: \"kubernetes.io/projected/f4b78547-1aed-4e78-9a66-db282c1161d5-kube-api-access-85cqh\") pod \"hello-world-app-55bf9c44b4-jklgj\" (UID: \"f4b78547-1aed-4e78-9a66-db282c1161d5\") " pod="default/hello-world-app-55bf9c44b4-jklgj"
	Oct 28 17:13:37 addons-186035 kubelet[1202]: E1028 17:13:37.221823    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135617221459504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:13:37 addons-186035 kubelet[1202]: E1028 17:13:37.221847    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135617221459504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6] <==
	I1028 17:08:00.253709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 17:08:00.337586       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 17:08:00.337658       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 17:08:00.398162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 17:08:00.398893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d2ab1a8-d417-4ce4-b56c-459b458982ae", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-186035_7e4b825e-773b-4140-bb78-7cb2a9a6ef9e became leader
	I1028 17:08:00.398933       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-186035_7e4b825e-773b-4140-bb78-7cb2a9a6ef9e!
	I1028 17:08:00.811127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-186035_7e4b825e-773b-4140-bb78-7cb2a9a6ef9e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-186035 -n addons-186035
helpers_test.go:261: (dbg) Run:  kubectl --context addons-186035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-jklgj ingress-nginx-admission-create-wf6gm ingress-nginx-admission-patch-9pthp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-186035 describe pod hello-world-app-55bf9c44b4-jklgj ingress-nginx-admission-create-wf6gm ingress-nginx-admission-patch-9pthp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-186035 describe pod hello-world-app-55bf9c44b4-jklgj ingress-nginx-admission-create-wf6gm ingress-nginx-admission-patch-9pthp: exit status 1 (65.269435ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-jklgj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-186035/192.168.39.15
	Start Time:       Mon, 28 Oct 2024 17:13:36 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85cqh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-85cqh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-jklgj to addons-186035
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wf6gm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9pthp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-186035 describe pod hello-world-app-55bf9c44b4-jklgj ingress-nginx-admission-create-wf6gm ingress-nginx-admission-patch-9pthp: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 addons disable ingress-dns --alsologtostderr -v=1: (1.528840142s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 addons disable ingress --alsologtostderr -v=1: (7.777878611s)
--- FAIL: TestAddons/parallel/Ingress (155.58s)

                                                
                                    
TestAddons/parallel/MetricsServer (366.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.509271ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-6vwqq" [2a6e6b1d-eaec-41b1-96c8-a3b0444088ec] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003250642s
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (84.557577ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 3m7.218493953s

                                                
                                                
** /stderr **
I1028 17:11:01.220295   20680 retry.go:31] will retry after 3.216244079s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (63.877845ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 3m10.49937416s

                                                
                                                
** /stderr **
I1028 17:11:04.500918   20680 retry.go:31] will retry after 2.50295748s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (63.964734ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 3m13.06737599s

                                                
                                                
** /stderr **
I1028 17:11:07.069076   20680 retry.go:31] will retry after 9.886807936s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (62.321781ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 3m23.016839454s

                                                
                                                
** /stderr **
I1028 17:11:17.018568   20680 retry.go:31] will retry after 11.524095691s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (62.030583ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 3m34.603159575s

                                                
                                                
** /stderr **
I1028 17:11:28.605001   20680 retry.go:31] will retry after 16.521708263s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (66.360066ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 3m51.192026907s

                                                
                                                
** /stderr **
I1028 17:11:45.193558   20680 retry.go:31] will retry after 30.195602957s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (63.189332ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 4m21.45271318s

                                                
                                                
** /stderr **
I1028 17:12:15.454285   20680 retry.go:31] will retry after 39.610392503s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (60.317704ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 5m1.123161359s

                                                
                                                
** /stderr **
I1028 17:12:55.125334   20680 retry.go:31] will retry after 29.631169623s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (60.541328ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 5m30.816576903s

                                                
                                                
** /stderr **
I1028 17:13:24.818311   20680 retry.go:31] will retry after 31.164700163s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (62.249002ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 6m2.045779488s

                                                
                                                
** /stderr **
I1028 17:13:56.047464   20680 retry.go:31] will retry after 1m6.709832893s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (65.245541ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 7m8.822839816s

                                                
                                                
** /stderr **
I1028 17:15:02.824952   20680 retry.go:31] will retry after 53.810504245s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (60.78917ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 8m2.69713204s

                                                
                                                
** /stderr **
I1028 17:15:56.698851   20680 retry.go:31] will retry after 1m2.153890393s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-186035 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-186035 top pods -n kube-system: exit status 1 (60.946096ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-cmh8f, age: 9m4.912531361s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-186035 -n addons-186035
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 logs -n 25: (1.157089341s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-565697                                                                     | download-only-565697 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| delete  | -p download-only-852823                                                                     | download-only-852823 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-523787 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | binary-mirror-523787                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-523787                                                                     | binary-mirror-523787 | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:07 UTC |
	| addons  | enable dashboard -p                                                                         | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-186035                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC |                     |
	|         | addons-186035                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-186035 --wait=true                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:07 UTC | 28 Oct 24 17:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:10 UTC | 28 Oct 24 17:10 UTC |
	|         | -p addons-186035                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-186035 ip                                                                            | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-186035 ssh curl -s                                                                   | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-186035 ssh cat                                                                       | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:11 UTC |
	|         | /opt/local-path-provisioner/pvc-055034d5-d0f2-4684-852f-71b9bf776565_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:11 UTC | 28 Oct 24 17:12 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:12 UTC | 28 Oct 24 17:12 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons                                                                        | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:12 UTC | 28 Oct 24 17:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:12 UTC | 28 Oct 24 17:12 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-186035 ip                                                                            | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:13 UTC |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:13 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-186035 addons disable                                                                | addons-186035        | jenkins | v1.34.0 | 28 Oct 24 17:13 UTC | 28 Oct 24 17:13 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:07:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:07:06.023262   21482 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:07:06.023369   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:06.023377   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:07:06.023381   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:07:06.023542   21482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:07:06.024065   21482 out.go:352] Setting JSON to false
	I1028 17:07:06.024887   21482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2969,"bootTime":1730132257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:07:06.024998   21482 start.go:139] virtualization: kvm guest
	I1028 17:07:06.026865   21482 out.go:177] * [addons-186035] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:07:06.028386   21482 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:07:06.028406   21482 notify.go:220] Checking for updates...
	I1028 17:07:06.030791   21482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:07:06.032241   21482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:07:06.033385   21482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:07:06.034487   21482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:07:06.035640   21482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:07:06.037075   21482 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:07:06.068523   21482 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 17:07:06.069666   21482 start.go:297] selected driver: kvm2
	I1028 17:07:06.069678   21482 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:07:06.069688   21482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:07:06.070336   21482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:07:06.070395   21482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:07:06.084040   21482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:07:06.084078   21482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:07:06.084336   21482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:07:06.084364   21482 cni.go:84] Creating CNI manager for ""
	I1028 17:07:06.084408   21482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:07:06.084418   21482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 17:07:06.084457   21482 start.go:340] cluster config:
	{Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:06.084596   21482 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:07:06.086134   21482 out.go:177] * Starting "addons-186035" primary control-plane node in "addons-186035" cluster
	I1028 17:07:06.087292   21482 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:06.087316   21482 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:07:06.087328   21482 cache.go:56] Caching tarball of preloaded images
	I1028 17:07:06.087390   21482 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:07:06.087402   21482 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:07:06.087681   21482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/config.json ...
	I1028 17:07:06.087709   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/config.json: {Name:mk56e20b9d6db6d349c73c0ce52b4e46b329f082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:06.087845   21482 start.go:360] acquireMachinesLock for addons-186035: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:07:06.087901   21482 start.go:364] duration metric: took 40.37µs to acquireMachinesLock for "addons-186035"
	I1028 17:07:06.087921   21482 start.go:93] Provisioning new machine with config: &{Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:07:06.087978   21482 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 17:07:06.089587   21482 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 17:07:06.089694   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:06.089742   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:06.102800   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I1028 17:07:06.103134   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:06.103648   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:06.103668   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:06.104012   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:06.104202   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:06.104335   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:06.104495   21482 start.go:159] libmachine.API.Create for "addons-186035" (driver="kvm2")
	I1028 17:07:06.104533   21482 client.go:168] LocalClient.Create starting
	I1028 17:07:06.104567   21482 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:07:06.209214   21482 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:07:06.360262   21482 main.go:141] libmachine: Running pre-create checks...
	I1028 17:07:06.360284   21482 main.go:141] libmachine: (addons-186035) Calling .PreCreateCheck
	I1028 17:07:06.360779   21482 main.go:141] libmachine: (addons-186035) Calling .GetConfigRaw
	I1028 17:07:06.361169   21482 main.go:141] libmachine: Creating machine...
	I1028 17:07:06.361183   21482 main.go:141] libmachine: (addons-186035) Calling .Create
	I1028 17:07:06.361330   21482 main.go:141] libmachine: (addons-186035) Creating KVM machine...
	I1028 17:07:06.362564   21482 main.go:141] libmachine: (addons-186035) DBG | found existing default KVM network
	I1028 17:07:06.363270   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.363133   21504 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a40}
	I1028 17:07:06.363290   21482 main.go:141] libmachine: (addons-186035) DBG | created network xml: 
	I1028 17:07:06.363309   21482 main.go:141] libmachine: (addons-186035) DBG | <network>
	I1028 17:07:06.363320   21482 main.go:141] libmachine: (addons-186035) DBG |   <name>mk-addons-186035</name>
	I1028 17:07:06.363330   21482 main.go:141] libmachine: (addons-186035) DBG |   <dns enable='no'/>
	I1028 17:07:06.363339   21482 main.go:141] libmachine: (addons-186035) DBG |   
	I1028 17:07:06.363354   21482 main.go:141] libmachine: (addons-186035) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 17:07:06.363369   21482 main.go:141] libmachine: (addons-186035) DBG |     <dhcp>
	I1028 17:07:06.363383   21482 main.go:141] libmachine: (addons-186035) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 17:07:06.363393   21482 main.go:141] libmachine: (addons-186035) DBG |     </dhcp>
	I1028 17:07:06.363403   21482 main.go:141] libmachine: (addons-186035) DBG |   </ip>
	I1028 17:07:06.363410   21482 main.go:141] libmachine: (addons-186035) DBG |   
	I1028 17:07:06.363422   21482 main.go:141] libmachine: (addons-186035) DBG | </network>
	I1028 17:07:06.363432   21482 main.go:141] libmachine: (addons-186035) DBG | 
	I1028 17:07:06.368447   21482 main.go:141] libmachine: (addons-186035) DBG | trying to create private KVM network mk-addons-186035 192.168.39.0/24...
	I1028 17:07:06.429592   21482 main.go:141] libmachine: (addons-186035) DBG | private KVM network mk-addons-186035 192.168.39.0/24 created
	I1028 17:07:06.429626   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.429547   21504 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:07:06.429645   21482 main.go:141] libmachine: (addons-186035) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035 ...
	I1028 17:07:06.429675   21482 main.go:141] libmachine: (addons-186035) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:07:06.429694   21482 main.go:141] libmachine: (addons-186035) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:07:06.703268   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.703138   21504 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa...
	I1028 17:07:06.820321   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.820217   21504 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/addons-186035.rawdisk...
	I1028 17:07:06.820368   21482 main.go:141] libmachine: (addons-186035) DBG | Writing magic tar header
	I1028 17:07:06.820382   21482 main.go:141] libmachine: (addons-186035) DBG | Writing SSH key tar header
	I1028 17:07:06.820398   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:06.820348   21504 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035 ...
	I1028 17:07:06.820496   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035
	I1028 17:07:06.820520   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035 (perms=drwx------)
	I1028 17:07:06.820533   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:07:06.820546   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:07:06.820555   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:07:06.820566   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:07:06.820575   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:07:06.820591   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:07:06.820605   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:07:06.820617   21482 main.go:141] libmachine: (addons-186035) DBG | Checking permissions on dir: /home
	I1028 17:07:06.820629   21482 main.go:141] libmachine: (addons-186035) DBG | Skipping /home - not owner
	I1028 17:07:06.820645   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:07:06.820662   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:07:06.820677   21482 main.go:141] libmachine: (addons-186035) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:07:06.820687   21482 main.go:141] libmachine: (addons-186035) Creating domain...
	I1028 17:07:06.821616   21482 main.go:141] libmachine: (addons-186035) define libvirt domain using xml: 
	I1028 17:07:06.821652   21482 main.go:141] libmachine: (addons-186035) <domain type='kvm'>
	I1028 17:07:06.821662   21482 main.go:141] libmachine: (addons-186035)   <name>addons-186035</name>
	I1028 17:07:06.821675   21482 main.go:141] libmachine: (addons-186035)   <memory unit='MiB'>4000</memory>
	I1028 17:07:06.821699   21482 main.go:141] libmachine: (addons-186035)   <vcpu>2</vcpu>
	I1028 17:07:06.821713   21482 main.go:141] libmachine: (addons-186035)   <features>
	I1028 17:07:06.821742   21482 main.go:141] libmachine: (addons-186035)     <acpi/>
	I1028 17:07:06.821765   21482 main.go:141] libmachine: (addons-186035)     <apic/>
	I1028 17:07:06.821777   21482 main.go:141] libmachine: (addons-186035)     <pae/>
	I1028 17:07:06.821793   21482 main.go:141] libmachine: (addons-186035)     
	I1028 17:07:06.821806   21482 main.go:141] libmachine: (addons-186035)   </features>
	I1028 17:07:06.821818   21482 main.go:141] libmachine: (addons-186035)   <cpu mode='host-passthrough'>
	I1028 17:07:06.821829   21482 main.go:141] libmachine: (addons-186035)   
	I1028 17:07:06.821850   21482 main.go:141] libmachine: (addons-186035)   </cpu>
	I1028 17:07:06.821861   21482 main.go:141] libmachine: (addons-186035)   <os>
	I1028 17:07:06.821873   21482 main.go:141] libmachine: (addons-186035)     <type>hvm</type>
	I1028 17:07:06.821885   21482 main.go:141] libmachine: (addons-186035)     <boot dev='cdrom'/>
	I1028 17:07:06.821895   21482 main.go:141] libmachine: (addons-186035)     <boot dev='hd'/>
	I1028 17:07:06.821906   21482 main.go:141] libmachine: (addons-186035)     <bootmenu enable='no'/>
	I1028 17:07:06.821914   21482 main.go:141] libmachine: (addons-186035)   </os>
	I1028 17:07:06.821925   21482 main.go:141] libmachine: (addons-186035)   <devices>
	I1028 17:07:06.821936   21482 main.go:141] libmachine: (addons-186035)     <disk type='file' device='cdrom'>
	I1028 17:07:06.821956   21482 main.go:141] libmachine: (addons-186035)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/boot2docker.iso'/>
	I1028 17:07:06.821971   21482 main.go:141] libmachine: (addons-186035)       <target dev='hdc' bus='scsi'/>
	I1028 17:07:06.821983   21482 main.go:141] libmachine: (addons-186035)       <readonly/>
	I1028 17:07:06.821991   21482 main.go:141] libmachine: (addons-186035)     </disk>
	I1028 17:07:06.822001   21482 main.go:141] libmachine: (addons-186035)     <disk type='file' device='disk'>
	I1028 17:07:06.822014   21482 main.go:141] libmachine: (addons-186035)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:07:06.822033   21482 main.go:141] libmachine: (addons-186035)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/addons-186035.rawdisk'/>
	I1028 17:07:06.822048   21482 main.go:141] libmachine: (addons-186035)       <target dev='hda' bus='virtio'/>
	I1028 17:07:06.822065   21482 main.go:141] libmachine: (addons-186035)     </disk>
	I1028 17:07:06.822082   21482 main.go:141] libmachine: (addons-186035)     <interface type='network'>
	I1028 17:07:06.822095   21482 main.go:141] libmachine: (addons-186035)       <source network='mk-addons-186035'/>
	I1028 17:07:06.822108   21482 main.go:141] libmachine: (addons-186035)       <model type='virtio'/>
	I1028 17:07:06.822119   21482 main.go:141] libmachine: (addons-186035)     </interface>
	I1028 17:07:06.822127   21482 main.go:141] libmachine: (addons-186035)     <interface type='network'>
	I1028 17:07:06.822139   21482 main.go:141] libmachine: (addons-186035)       <source network='default'/>
	I1028 17:07:06.822159   21482 main.go:141] libmachine: (addons-186035)       <model type='virtio'/>
	I1028 17:07:06.822171   21482 main.go:141] libmachine: (addons-186035)     </interface>
	I1028 17:07:06.822183   21482 main.go:141] libmachine: (addons-186035)     <serial type='pty'>
	I1028 17:07:06.822194   21482 main.go:141] libmachine: (addons-186035)       <target port='0'/>
	I1028 17:07:06.822203   21482 main.go:141] libmachine: (addons-186035)     </serial>
	I1028 17:07:06.822219   21482 main.go:141] libmachine: (addons-186035)     <console type='pty'>
	I1028 17:07:06.822231   21482 main.go:141] libmachine: (addons-186035)       <target type='serial' port='0'/>
	I1028 17:07:06.822243   21482 main.go:141] libmachine: (addons-186035)     </console>
	I1028 17:07:06.822257   21482 main.go:141] libmachine: (addons-186035)     <rng model='virtio'>
	I1028 17:07:06.822271   21482 main.go:141] libmachine: (addons-186035)       <backend model='random'>/dev/random</backend>
	I1028 17:07:06.822280   21482 main.go:141] libmachine: (addons-186035)     </rng>
	I1028 17:07:06.822290   21482 main.go:141] libmachine: (addons-186035)     
	I1028 17:07:06.822298   21482 main.go:141] libmachine: (addons-186035)     
	I1028 17:07:06.822309   21482 main.go:141] libmachine: (addons-186035)   </devices>
	I1028 17:07:06.822317   21482 main.go:141] libmachine: (addons-186035) </domain>
	I1028 17:07:06.822334   21482 main.go:141] libmachine: (addons-186035) 
	I1028 17:07:06.827859   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:4f:55:51 in network default
	I1028 17:07:06.828371   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:06.828389   21482 main.go:141] libmachine: (addons-186035) Ensuring networks are active...
	I1028 17:07:06.828934   21482 main.go:141] libmachine: (addons-186035) Ensuring network default is active
	I1028 17:07:06.829243   21482 main.go:141] libmachine: (addons-186035) Ensuring network mk-addons-186035 is active
	I1028 17:07:06.829685   21482 main.go:141] libmachine: (addons-186035) Getting domain xml...
	I1028 17:07:06.830337   21482 main.go:141] libmachine: (addons-186035) Creating domain...
	I1028 17:07:08.201932   21482 main.go:141] libmachine: (addons-186035) Waiting to get IP...
	I1028 17:07:08.202806   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:08.203095   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:08.203142   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:08.203086   21504 retry.go:31] will retry after 211.26097ms: waiting for machine to come up
	I1028 17:07:08.415296   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:08.415717   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:08.415746   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:08.415668   21504 retry.go:31] will retry after 338.97837ms: waiting for machine to come up
	I1028 17:07:08.756084   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:08.756484   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:08.756515   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:08.756416   21504 retry.go:31] will retry after 431.773016ms: waiting for machine to come up
	I1028 17:07:09.189885   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:09.190293   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:09.190318   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:09.190254   21504 retry.go:31] will retry after 507.772359ms: waiting for machine to come up
	I1028 17:07:09.699830   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:09.700184   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:09.700209   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:09.700134   21504 retry.go:31] will retry after 758.007253ms: waiting for machine to come up
	I1028 17:07:10.459957   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:10.460389   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:10.460414   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:10.460340   21504 retry.go:31] will retry after 903.570429ms: waiting for machine to come up
	I1028 17:07:11.364881   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:11.365302   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:11.365361   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:11.365296   21504 retry.go:31] will retry after 1.054833216s: waiting for machine to come up
	I1028 17:07:12.421406   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:12.421827   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:12.421850   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:12.421780   21504 retry.go:31] will retry after 1.246115446s: waiting for machine to come up
	I1028 17:07:13.670059   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:13.670436   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:13.670472   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:13.670400   21504 retry.go:31] will retry after 1.569122093s: waiting for machine to come up
	I1028 17:07:15.241605   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:15.241983   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:15.242015   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:15.241925   21504 retry.go:31] will retry after 1.64438524s: waiting for machine to come up
	I1028 17:07:16.888910   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:16.889350   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:16.889379   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:16.889308   21504 retry.go:31] will retry after 2.156287404s: waiting for machine to come up
	I1028 17:07:19.046824   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:19.047200   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:19.047225   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:19.047151   21504 retry.go:31] will retry after 3.084774607s: waiting for machine to come up
	I1028 17:07:22.133426   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:22.133774   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:22.133806   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:22.133714   21504 retry.go:31] will retry after 4.405522494s: waiting for machine to come up
	I1028 17:07:26.540979   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:26.541414   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find current IP address of domain addons-186035 in network mk-addons-186035
	I1028 17:07:26.541437   21482 main.go:141] libmachine: (addons-186035) DBG | I1028 17:07:26.541388   21504 retry.go:31] will retry after 4.107542395s: waiting for machine to come up
	I1028 17:07:30.653515   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.653928   21482 main.go:141] libmachine: (addons-186035) Found IP for machine: 192.168.39.15
	I1028 17:07:30.653955   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has current primary IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.653962   21482 main.go:141] libmachine: (addons-186035) Reserving static IP address...
	I1028 17:07:30.654400   21482 main.go:141] libmachine: (addons-186035) DBG | unable to find host DHCP lease matching {name: "addons-186035", mac: "52:54:00:fd:e8:0a", ip: "192.168.39.15"} in network mk-addons-186035
	I1028 17:07:30.721605   21482 main.go:141] libmachine: (addons-186035) DBG | Getting to WaitForSSH function...
	I1028 17:07:30.721636   21482 main.go:141] libmachine: (addons-186035) Reserved static IP address: 192.168.39.15
	I1028 17:07:30.721668   21482 main.go:141] libmachine: (addons-186035) Waiting for SSH to be available...
	I1028 17:07:30.723800   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.724146   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:30.724170   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.724369   21482 main.go:141] libmachine: (addons-186035) DBG | Using SSH client type: external
	I1028 17:07:30.724407   21482 main.go:141] libmachine: (addons-186035) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa (-rw-------)
	I1028 17:07:30.724437   21482 main.go:141] libmachine: (addons-186035) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:07:30.724451   21482 main.go:141] libmachine: (addons-186035) DBG | About to run SSH command:
	I1028 17:07:30.724461   21482 main.go:141] libmachine: (addons-186035) DBG | exit 0
	I1028 17:07:30.848262   21482 main.go:141] libmachine: (addons-186035) DBG | SSH cmd err, output: <nil>: 
	I1028 17:07:30.848490   21482 main.go:141] libmachine: (addons-186035) KVM machine creation complete!
	I1028 17:07:30.848760   21482 main.go:141] libmachine: (addons-186035) Calling .GetConfigRaw
	I1028 17:07:30.849275   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:30.849435   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:30.849576   21482 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:07:30.849591   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:30.850766   21482 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:07:30.850777   21482 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:07:30.850782   21482 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:07:30.850787   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:30.852722   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.853081   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:30.853110   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.853219   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:30.853390   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.853513   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.853649   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:30.853783   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:30.854020   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:30.854039   21482 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:07:30.951656   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:07:30.951680   21482 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:07:30.951690   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:30.954210   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.954524   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:30.954547   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:30.954704   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:30.954900   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.955051   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:30.955178   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:30.955320   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:30.955523   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:30.955537   21482 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:07:31.052931   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:07:31.053013   21482 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:07:31.053025   21482 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:07:31.053034   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:31.053278   21482 buildroot.go:166] provisioning hostname "addons-186035"
	I1028 17:07:31.053307   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:31.053453   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.055934   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.056239   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.056256   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.056367   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.056528   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.056677   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.056786   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.056943   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.057126   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.057141   21482 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-186035 && echo "addons-186035" | sudo tee /etc/hostname
	I1028 17:07:31.170205   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-186035
	
	I1028 17:07:31.170231   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.172999   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.173320   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.173343   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.173539   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.173707   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.173842   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.173941   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.174083   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.174716   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.174746   21482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-186035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-186035/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-186035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:07:31.280812   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:07:31.280841   21482 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:07:31.280857   21482 buildroot.go:174] setting up certificates
	I1028 17:07:31.280867   21482 provision.go:84] configureAuth start
	I1028 17:07:31.280875   21482 main.go:141] libmachine: (addons-186035) Calling .GetMachineName
	I1028 17:07:31.281143   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:31.283705   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.284047   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.284069   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.284261   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.286261   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.286575   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.286600   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.286729   21482 provision.go:143] copyHostCerts
	I1028 17:07:31.286800   21482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:07:31.286912   21482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:07:31.286973   21482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:07:31.287032   21482 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.addons-186035 san=[127.0.0.1 192.168.39.15 addons-186035 localhost minikube]
	I1028 17:07:31.489724   21482 provision.go:177] copyRemoteCerts
	I1028 17:07:31.489778   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:07:31.489799   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.492266   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.492638   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.492665   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.492827   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.493005   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.493161   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.493279   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:31.570093   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:07:31.592765   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:07:31.615119   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:07:31.637067   21482 provision.go:87] duration metric: took 356.189922ms to configureAuth
	I1028 17:07:31.637092   21482 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:07:31.637286   21482 config.go:182] Loaded profile config "addons-186035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:31.637432   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.639858   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.640166   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.640194   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.640360   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.640551   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.640712   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.640828   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.640964   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.641159   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.641174   21482 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:07:31.852812   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:07:31.852837   21482 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:07:31.852847   21482 main.go:141] libmachine: (addons-186035) Calling .GetURL
	I1028 17:07:31.854449   21482 main.go:141] libmachine: (addons-186035) DBG | Using libvirt version 6000000
	I1028 17:07:31.856748   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.857085   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.857111   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.857223   21482 main.go:141] libmachine: Docker is up and running!
	I1028 17:07:31.857238   21482 main.go:141] libmachine: Reticulating splines...
	I1028 17:07:31.857244   21482 client.go:171] duration metric: took 25.752701898s to LocalClient.Create
	I1028 17:07:31.857257   21482 start.go:167] duration metric: took 25.752765567s to libmachine.API.Create "addons-186035"
	I1028 17:07:31.857267   21482 start.go:293] postStartSetup for "addons-186035" (driver="kvm2")
	I1028 17:07:31.857276   21482 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:07:31.857291   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:31.857513   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:07:31.857540   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.859746   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.860015   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.860033   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.860216   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.860365   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.860546   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.860689   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:31.938580   21482 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:07:31.942681   21482 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:07:31.942709   21482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:07:31.942794   21482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:07:31.942826   21482 start.go:296] duration metric: took 85.553049ms for postStartSetup
	I1028 17:07:31.942865   21482 main.go:141] libmachine: (addons-186035) Calling .GetConfigRaw
	I1028 17:07:31.943430   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:31.945814   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.946185   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.946212   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.946399   21482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/config.json ...
	I1028 17:07:31.946565   21482 start.go:128] duration metric: took 25.85857794s to createHost
	I1028 17:07:31.946586   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:31.948702   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.949032   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:31.949056   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:31.949161   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:31.949312   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.949441   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:31.949544   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:31.949663   21482 main.go:141] libmachine: Using SSH client type: native
	I1028 17:07:31.949816   21482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 17:07:31.949826   21482 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:07:32.044690   21482 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730135252.013899420
	
	I1028 17:07:32.044714   21482 fix.go:216] guest clock: 1730135252.013899420
	I1028 17:07:32.044723   21482 fix.go:229] Guest: 2024-10-28 17:07:32.01389942 +0000 UTC Remote: 2024-10-28 17:07:31.946575948 +0000 UTC m=+25.957944270 (delta=67.323472ms)
	I1028 17:07:32.044760   21482 fix.go:200] guest clock delta is within tolerance: 67.323472ms
	I1028 17:07:32.044767   21482 start.go:83] releasing machines lock for "addons-186035", held for 25.956855526s
	I1028 17:07:32.044785   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.045042   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:32.047595   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.047988   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:32.048009   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.048189   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.048675   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.048816   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:32.048916   21482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:07:32.048958   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:32.048988   21482 ssh_runner.go:195] Run: cat /version.json
	I1028 17:07:32.049007   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:32.051330   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.051636   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.051669   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:32.051712   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.051793   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:32.051958   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:32.052097   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:32.052120   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:32.052136   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:32.052275   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:32.052289   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:32.052413   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:32.052539   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:32.052677   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:32.149172   21482 ssh_runner.go:195] Run: systemctl --version
	I1028 17:07:32.155030   21482 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:07:32.312931   21482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:07:32.318889   21482 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:07:32.318945   21482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:07:32.334582   21482 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:07:32.334601   21482 start.go:495] detecting cgroup driver to use...
	I1028 17:07:32.334646   21482 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:07:32.350793   21482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:07:32.364418   21482 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:07:32.364454   21482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:07:32.377495   21482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:07:32.390831   21482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:07:32.499414   21482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:07:32.656723   21482 docker.go:233] disabling docker service ...
	I1028 17:07:32.656777   21482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:07:32.670576   21482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:07:32.683025   21482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:07:32.789823   21482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:07:32.893875   21482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:07:32.907462   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:07:32.924915   21482 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:07:32.924962   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.935334   21482 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:07:32.935409   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.945690   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.955838   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.966144   21482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:07:32.976679   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:32.986688   21482 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:33.002765   21482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:07:33.012790   21482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:07:33.021810   21482 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:07:33.021851   21482 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:07:33.034728   21482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:07:33.043688   21482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:33.150990   21482 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:07:33.245922   21482 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:07:33.246032   21482 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:07:33.250528   21482 start.go:563] Will wait 60s for crictl version
	I1028 17:07:33.250580   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:07:33.254243   21482 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:07:33.291843   21482 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:07:33.291971   21482 ssh_runner.go:195] Run: crio --version
	I1028 17:07:33.318401   21482 ssh_runner.go:195] Run: crio --version
	I1028 17:07:33.347151   21482 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:07:33.348495   21482 main.go:141] libmachine: (addons-186035) Calling .GetIP
	I1028 17:07:33.350869   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:33.351144   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:33.351174   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:33.351340   21482 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:07:33.355278   21482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:07:33.367754   21482 kubeadm.go:883] updating cluster {Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:07:33.367854   21482 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:07:33.367893   21482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:33.402912   21482 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 17:07:33.402969   21482 ssh_runner.go:195] Run: which lz4
	I1028 17:07:33.406802   21482 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 17:07:33.410880   21482 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 17:07:33.410904   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 17:07:34.643209   21482 crio.go:462] duration metric: took 1.236426115s to copy over tarball
	I1028 17:07:34.643286   21482 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 17:07:36.700078   21482 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.05675708s)
	I1028 17:07:36.700100   21482 crio.go:469] duration metric: took 2.056863264s to extract the tarball
	I1028 17:07:36.700108   21482 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 17:07:36.737841   21482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:07:36.778730   21482 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:07:36.778758   21482 cache_images.go:84] Images are preloaded, skipping loading
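
The preload flow above asks crictl which images are already present and, when the expected kube images are missing, copies the lz4 tarball over and unpacks it into /var so CRI-O picks the images up on the next listing. A minimal Go sketch of just the extraction step, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirror of the tar invocation in the log: decompress with lz4 and
        // unpack into /var, preserving security.capability xattrs.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extracting preload tarball: %v\n%s", err, out)
        }
    }
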
	I1028 17:07:36.778769   21482 kubeadm.go:934] updating node { 192.168.39.15 8443 v1.31.2 crio true true} ...
	I1028 17:07:36.778864   21482 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-186035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:07:36.778928   21482 ssh_runner.go:195] Run: crio config
	I1028 17:07:36.822774   21482 cni.go:84] Creating CNI manager for ""
	I1028 17:07:36.822800   21482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:07:36.822811   21482 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:07:36.822839   21482 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-186035 NodeName:addons-186035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:07:36.822989   21482 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-186035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:07:36.823065   21482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:07:36.832952   21482 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:07:36.833018   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 17:07:36.842254   21482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 17:07:36.857816   21482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:07:36.873386   21482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
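
With the rendered config written to /var/tmp/minikube/kubeadm.yaml.new, one way to sanity-check it before the real init is a kubeadm dry run. The sketch below is illustrative only and assumes the v1.31.x kubeadm binary sits in the directory the log later puts on PATH:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // --dry-run validates the config and renders the manifests without
        // touching the node; the config path is the file from the log above.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubeadm",
            "init", "--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubeadm dry run failed: %v\n%s", err, out)
        }
    }
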
	I1028 17:07:36.888638   21482 ssh_runner.go:195] Run: grep 192.168.39.15	control-plane.minikube.internal$ /etc/hosts
	I1028 17:07:36.892006   21482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:07:36.903391   21482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:37.007615   21482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:07:37.023383   21482 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035 for IP: 192.168.39.15
	I1028 17:07:37.023403   21482 certs.go:194] generating shared ca certs ...
	I1028 17:07:37.023417   21482 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.023555   21482 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:07:37.094339   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt ...
	I1028 17:07:37.094367   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt: {Name:mkada548ed9e0c555f18d752b1d48c2553324d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.094547   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key ...
	I1028 17:07:37.094566   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key: {Name:mk7617196eb13bec3904d40a6eb678c962caa127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.094662   21482 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:07:37.296322   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt ...
	I1028 17:07:37.296350   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt: {Name:mk907c2ff38a41d71da690c87000fdec457eedf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.296536   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key ...
	I1028 17:07:37.296550   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key: {Name:mkd5769b54aa6510303440ab3c3d5990a21d9179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.296644   21482 certs.go:256] generating profile certs ...
	I1028 17:07:37.296714   21482 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.key
	I1028 17:07:37.296731   21482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt with IP's: []
	I1028 17:07:37.365116   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt ...
	I1028 17:07:37.365142   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: {Name:mk487e652aecd824a7f47239181ca89c76ddaa90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.365304   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.key ...
	I1028 17:07:37.365319   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.key: {Name:mka84a25792ede6a47b729c3ceff8f0cb7111375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.365419   21482 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a
	I1028 17:07:37.365437   21482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.15]
	I1028 17:07:37.473713   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a ...
	I1028 17:07:37.473743   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a: {Name:mk248109e4e732c5f785720069c3ec8f2de866d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.473908   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a ...
	I1028 17:07:37.473934   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a: {Name:mk50db299e4807dbfdcb03b09ebc15fd48dd67b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.474065   21482 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt.bb79669a -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt
	I1028 17:07:37.474183   21482 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key.bb79669a -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key
	I1028 17:07:37.474258   21482 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key
	I1028 17:07:37.474280   21482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt with IP's: []
	I1028 17:07:37.734531   21482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt ...
	I1028 17:07:37.734567   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt: {Name:mk442dfe2507a428f23025393ef9a62e46c131dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.734747   21482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key ...
	I1028 17:07:37.734762   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key: {Name:mkdb5af6742887099ce4f26b9a16b971f8da3993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:37.734951   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:07:37.735002   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:07:37.735039   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:07:37.735073   21482 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
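
The certs.go steps above amount to creating two self-signed CAs (minikubeCA and proxyClientCA) and then issuing profile certificates signed by them. A stripped-down Go sketch of the CA half, using only the standard library with made-up values (this is not minikube's crypto.go):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Generate the CA key pair.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        // Self-signed CA template; subject and lifetime are illustrative.
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        // PEM-encode the certificate; the key would be written the same way
        // using x509.MarshalPKCS1PrivateKey with an "RSA PRIVATE KEY" block.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
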
	I1028 17:07:37.735666   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:07:37.769726   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:07:37.803916   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:07:37.830176   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:07:37.852090   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 17:07:37.873920   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:07:37.895585   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:07:37.916994   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 17:07:37.938543   21482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:07:37.960307   21482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:07:37.975584   21482 ssh_runner.go:195] Run: openssl version
	I1028 17:07:37.981026   21482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:07:37.991252   21482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:37.995565   21482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:37.995613   21482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:07:38.001121   21482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:07:38.011529   21482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:07:38.015267   21482 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:07:38.015315   21482 kubeadm.go:392] StartCluster: {Name:addons-186035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-186035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:07:38.015391   21482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:07:38.015458   21482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:07:38.050576   21482 cri.go:89] found id: ""
	I1028 17:07:38.050640   21482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:07:38.060370   21482 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:07:38.070040   21482 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:07:38.079484   21482 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:07:38.079505   21482 kubeadm.go:157] found existing configuration files:
	
	I1028 17:07:38.079547   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:07:38.088148   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:07:38.088198   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:07:38.097156   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:07:38.105646   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:07:38.105695   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:07:38.114647   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:07:38.123228   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:07:38.123275   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:07:38.132235   21482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:07:38.140745   21482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:07:38.140781   21482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 17:07:38.149583   21482 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 17:07:38.198633   21482 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:07:38.198755   21482 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:07:38.295379   21482 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:07:38.295469   21482 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:07:38.295562   21482 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:07:38.305391   21482 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:07:38.307678   21482 out.go:235]   - Generating certificates and keys ...
	I1028 17:07:38.307781   21482 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:07:38.307867   21482 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:07:38.406005   21482 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:07:38.616391   21482 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:07:38.695199   21482 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:07:38.889115   21482 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:07:38.990534   21482 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:07:38.990839   21482 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-186035 localhost] and IPs [192.168.39.15 127.0.0.1 ::1]
	I1028 17:07:39.067578   21482 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:07:39.067968   21482 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-186035 localhost] and IPs [192.168.39.15 127.0.0.1 ::1]
	I1028 17:07:39.251059   21482 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:07:39.418889   21482 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:07:39.653294   21482 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:07:39.653553   21482 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:07:39.729619   21482 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:07:40.187636   21482 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:07:40.378175   21482 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:07:40.499205   21482 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:07:40.671893   21482 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:07:40.672418   21482 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:07:40.674801   21482 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:07:40.719322   21482 out.go:235]   - Booting up control plane ...
	I1028 17:07:40.719429   21482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:07:40.719503   21482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:07:40.719576   21482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:07:40.719706   21482 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:07:40.719825   21482 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:07:40.719890   21482 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:07:40.827039   21482 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:07:40.827184   21482 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:07:41.328506   21482 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.776952ms
	I1028 17:07:41.328601   21482 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:07:46.326901   21482 kubeadm.go:310] [api-check] The API server is healthy after 5.00108443s
	I1028 17:07:46.346228   21482 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:07:46.362805   21482 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:07:46.406830   21482 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:07:46.407060   21482 kubeadm.go:310] [mark-control-plane] Marking the node addons-186035 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:07:46.419772   21482 kubeadm.go:310] [bootstrap-token] Using token: dfdzjm.eymlbvu4shoxlmen
	I1028 17:07:46.421017   21482 out.go:235]   - Configuring RBAC rules ...
	I1028 17:07:46.421176   21482 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:07:46.428636   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:07:46.439235   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:07:46.444538   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:07:46.448194   21482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:07:46.454781   21482 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:07:46.732898   21482 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:07:47.172240   21482 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:07:47.732054   21482 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:07:47.738101   21482 kubeadm.go:310] 
	I1028 17:07:47.738181   21482 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:07:47.738194   21482 kubeadm.go:310] 
	I1028 17:07:47.738314   21482 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:07:47.738337   21482 kubeadm.go:310] 
	I1028 17:07:47.738388   21482 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:07:47.738516   21482 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:07:47.738598   21482 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:07:47.738610   21482 kubeadm.go:310] 
	I1028 17:07:47.738679   21482 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:07:47.738691   21482 kubeadm.go:310] 
	I1028 17:07:47.738746   21482 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:07:47.738755   21482 kubeadm.go:310] 
	I1028 17:07:47.738824   21482 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:07:47.738919   21482 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:07:47.739025   21482 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:07:47.739048   21482 kubeadm.go:310] 
	I1028 17:07:47.739166   21482 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:07:47.739281   21482 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:07:47.739297   21482 kubeadm.go:310] 
	I1028 17:07:47.739406   21482 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dfdzjm.eymlbvu4shoxlmen \
	I1028 17:07:47.739553   21482 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 17:07:47.739585   21482 kubeadm.go:310] 	--control-plane 
	I1028 17:07:47.739596   21482 kubeadm.go:310] 
	I1028 17:07:47.739721   21482 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:07:47.739741   21482 kubeadm.go:310] 
	I1028 17:07:47.739851   21482 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dfdzjm.eymlbvu4shoxlmen \
	I1028 17:07:47.739979   21482 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 17:07:47.741855   21482 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 17:07:47.741886   21482 cni.go:84] Creating CNI manager for ""
	I1028 17:07:47.741896   21482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:07:47.743459   21482 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 17:07:47.744658   21482 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 17:07:47.755072   21482 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 17:07:47.775272   21482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:07:47.775341   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:47.775405   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-186035 minikube.k8s.io/updated_at=2024_10_28T17_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=addons-186035 minikube.k8s.io/primary=true
	I1028 17:07:47.795889   21482 ops.go:34] apiserver oom_adj: -16
	I1028 17:07:47.920087   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:48.420279   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:48.920721   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:49.420583   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:49.920445   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:50.420583   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:50.920910   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:51.420497   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:51.920444   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:52.420920   21482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:07:52.524157   21482 kubeadm.go:1113] duration metric: took 4.748871873s to wait for elevateKubeSystemPrivileges
	I1028 17:07:52.524203   21482 kubeadm.go:394] duration metric: took 14.508889603s to StartCluster
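
The repeated "kubectl get sa default" calls above are a readiness poll: kubeadm has finished, but the default ServiceAccount in the default namespace only appears once the controller-manager has caught up, and the RBAC binding created just before needs an API server that is actually serving. A Go sketch of that poll, reusing the binary and kubeconfig paths from the log:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the default ServiceAccount exists.
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default ServiceAccount")
    }
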
	I1028 17:07:52.524228   21482 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:52.524384   21482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:07:52.524906   21482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:07:52.525153   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:07:52.525175   21482 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:07:52.525245   21482 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1028 17:07:52.525385   21482 config.go:182] Loaded profile config "addons-186035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:52.525403   21482 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-186035"
	I1028 17:07:52.525401   21482 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-186035"
	I1028 17:07:52.525423   21482 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-186035"
	I1028 17:07:52.525428   21482 addons.go:69] Setting default-storageclass=true in profile "addons-186035"
	I1028 17:07:52.525440   21482 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-186035"
	I1028 17:07:52.525430   21482 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-186035"
	I1028 17:07:52.525465   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525474   21482 addons.go:69] Setting gcp-auth=true in profile "addons-186035"
	I1028 17:07:52.525455   21482 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-186035"
	I1028 17:07:52.525479   21482 addons.go:69] Setting registry=true in profile "addons-186035"
	I1028 17:07:52.525490   21482 addons.go:69] Setting ingress-dns=true in profile "addons-186035"
	I1028 17:07:52.525500   21482 addons.go:69] Setting inspektor-gadget=true in profile "addons-186035"
	I1028 17:07:52.525501   21482 addons.go:69] Setting storage-provisioner=true in profile "addons-186035"
	I1028 17:07:52.525507   21482 addons.go:234] Setting addon ingress-dns=true in "addons-186035"
	I1028 17:07:52.525511   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525513   21482 addons.go:234] Setting addon inspektor-gadget=true in "addons-186035"
	I1028 17:07:52.525513   21482 addons.go:234] Setting addon storage-provisioner=true in "addons-186035"
	I1028 17:07:52.525537   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525545   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525549   21482 addons.go:69] Setting volumesnapshots=true in profile "addons-186035"
	I1028 17:07:52.525564   21482 addons.go:234] Setting addon volumesnapshots=true in "addons-186035"
	I1028 17:07:52.525587   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525854   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525890   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525931   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525949   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525954   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.525391   21482 addons.go:69] Setting yakd=true in profile "addons-186035"
	I1028 17:07:52.525971   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525979   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525983   21482 addons.go:69] Setting ingress=true in profile "addons-186035"
	I1028 17:07:52.525996   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.526046   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.526131   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525473   21482 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-186035"
	I1028 17:07:52.526001   21482 addons.go:234] Setting addon ingress=true in "addons-186035"
	I1028 17:07:52.526504   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525407   21482 addons.go:69] Setting metrics-server=true in profile "addons-186035"
	I1028 17:07:52.526561   21482 addons.go:234] Setting addon metrics-server=true in "addons-186035"
	I1028 17:07:52.526593   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.526593   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.526628   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.526859   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.526887   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.526988   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.527016   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525981   21482 addons.go:234] Setting addon yakd=true in "addons-186035"
	I1028 17:07:52.527088   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.527449   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.527475   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525541   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.527957   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.528016   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.525492   21482 addons.go:234] Setting addon registry=true in "addons-186035"
	I1028 17:07:52.528188   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525463   21482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-186035"
	I1028 17:07:52.525466   21482 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-186035"
	I1028 17:07:52.528345   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525467   21482 addons.go:69] Setting volcano=true in profile "addons-186035"
	I1028 17:07:52.528625   21482 addons.go:234] Setting addon volcano=true in "addons-186035"
	I1028 17:07:52.528686   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525477   21482 addons.go:69] Setting cloud-spanner=true in profile "addons-186035"
	I1028 17:07:52.529112   21482 addons.go:234] Setting addon cloud-spanner=true in "addons-186035"
	I1028 17:07:52.529151   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.525493   21482 mustload.go:65] Loading cluster: addons-186035
	I1028 17:07:52.532522   21482 out.go:177] * Verifying Kubernetes components...
	I1028 17:07:52.533951   21482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:07:52.546594   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1028 17:07:52.546808   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I1028 17:07:52.546840   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1028 17:07:52.547126   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.547252   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.547275   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.547814   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.547832   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.548189   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.548295   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.548314   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.548327   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.548365   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.548880   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1028 17:07:52.548914   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.548921   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.549256   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.549274   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.549311   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.549803   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.549835   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.550082   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.550126   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.550158   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.551703   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I1028 17:07:52.556743   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.556786   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.557035   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.557074   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.557394   21482 config.go:182] Loaded profile config "addons-186035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:07:52.557731   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.557775   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.558263   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.558306   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.558832   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.558877   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.559376   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.559411   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.559886   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.559921   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.560412   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.560444   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.565092   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I1028 17:07:52.565227   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I1028 17:07:52.565629   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.565736   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.565804   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.566349   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.566355   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.566366   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.566370   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.567089   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.567106   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.567164   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.567198   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.567778   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.567811   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.567883   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.567952   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.572197   21482 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-186035"
	I1028 17:07:52.572240   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.572729   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.572759   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.589114   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.589169   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.595230   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I1028 17:07:52.595458   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
	I1028 17:07:52.596462   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.597106   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.597127   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.597593   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I1028 17:07:52.597730   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.597801   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I1028 17:07:52.597954   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.598042   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.598483   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.598636   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.598648   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.599756   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.599857   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I1028 17:07:52.599944   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.600019   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.600427   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.600461   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.600968   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.601491   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.601509   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.601567   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.602073   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.602092   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.602399   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.603012   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.603056   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.603413   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.603475   21482 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 17:07:52.603588   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I1028 17:07:52.603880   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.604083   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.604130   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.604201   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.604592   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.604613   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.604721   21482 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 17:07:52.604739   21482 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 17:07:52.604765   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.604912   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.605125   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.605146   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.605204   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.605460   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.605587   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.606007   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
	I1028 17:07:52.607309   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
	I1028 17:07:52.607828   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.608546   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.608766   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.608792   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.609002   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.609034   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.609107   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.609286   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.609341   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.609459   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.609573   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.609680   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.609977   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I1028 17:07:52.610094   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.610802   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.610998   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.611020   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.611129   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.611451   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.611705   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.612108   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.612385   21482 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 17:07:52.612646   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.612669   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.613230   21482 addons.go:234] Setting addon default-storageclass=true in "addons-186035"
	I1028 17:07:52.613274   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:07:52.613624   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.613663   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.614103   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I1028 17:07:52.614369   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I1028 17:07:52.614587   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.614657   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.614731   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.614768   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 17:07:52.614831   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41759
	I1028 17:07:52.614916   21482 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:07:52.614927   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 17:07:52.614943   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.615363   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.615402   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.616033   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.616047   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.616103   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.616406   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 17:07:52.616421   21482 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 17:07:52.616438   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.616690   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.616709   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.616847   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.617162   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.617619   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.617647   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.617679   21482 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 17:07:52.617773   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.617806   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.618409   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.618892   21482 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:07:52.618907   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 17:07:52.618922   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.620157   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.620179   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.620613   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.621672   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.621698   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.621716   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.621886   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.621920   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.622764   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.623486   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.623599   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.623739   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.623846   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.623944   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.624757   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.624777   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.624803   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I1028 17:07:52.625064   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.625119   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.625455   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.625570   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.625657   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.625676   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.625677   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.625725   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.625858   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.625978   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.626181   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.626196   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.626254   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.626588   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.627119   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.627151   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.645347   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I1028 17:07:52.645945   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.646267   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I1028 17:07:52.646620   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I1028 17:07:52.646796   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.647007   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.647169   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.647180   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.647304   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.647317   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.648413   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.648477   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.648511   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.648534   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.649226   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.649267   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.649720   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:07:52.649761   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:07:52.649978   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1028 17:07:52.649988   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36727
	I1028 17:07:52.649994   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.650281   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.650554   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.650617   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.651037   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.651054   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.651424   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.651608   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.651926   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.651943   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.652351   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.652563   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.653103   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.654166   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.654378   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:52.654399   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:52.656097   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I1028 17:07:52.656102   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:52.656129   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:52.656134   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:52.656139   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:52.656143   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:52.656369   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:52.656399   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:52.656407   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	W1028 17:07:52.656570   21482 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 17:07:52.657184   21482 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 17:07:52.657442   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.657651   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I1028 17:07:52.658060   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.658118   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.658528   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.658547   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.658956   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.658971   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.659022   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.659273   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.659335   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.659473   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.659868   21482 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 17:07:52.659961   21482 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 17:07:52.661031   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 17:07:52.661050   21482 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 17:07:52.661075   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.661208   21482 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 17:07:52.661218   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 17:07:52.661232   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.662012   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I1028 17:07:52.662159   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I1028 17:07:52.662579   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.662651   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.662757   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I1028 17:07:52.663042   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.663292   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.663351   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.663400   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.663414   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.663421   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.663820   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.663926   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.664059   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.664184   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.664220   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.664266   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.664294   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.664737   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.664945   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.665066   21482 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 17:07:52.665149   21482 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 17:07:52.666265   21482 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:07:52.666289   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 17:07:52.666306   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.666368   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 17:07:52.666382   21482 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 17:07:52.666395   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.667805   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.668253   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.669223   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 17:07:52.669224   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 17:07:52.670127   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.670586   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.670609   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.670766   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.670819   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.670950   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.671099   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.671243   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.671266   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.671288   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.671407   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.671551   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.671687   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.671689   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.671802   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.671825   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.672243   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.672262   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.672408   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 17:07:52.672414   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:07:52.672615   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.672635   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.672891   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.672963   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.673034   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.673149   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.673205   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.673392   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.673420   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.673728   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.673983   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I1028 17:07:52.674408   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.674775   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.674788   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.674844   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1028 17:07:52.675110   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:07:52.675126   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.675171   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.675129   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 17:07:52.675351   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.675881   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.675901   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.676315   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.676573   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.676684   21482 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:07:52.676700   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 17:07:52.676715   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.677359   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.677644   21482 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:07:52.677667   21482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:07:52.677683   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.677981   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 17:07:52.678944   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.679452   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45537
	I1028 17:07:52.679804   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.680248   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 17:07:52.680372   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.680387   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.680448   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34101
	I1028 17:07:52.680248   21482 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 17:07:52.680756   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.680801   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:07:52.681062   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.681202   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:07:52.681213   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:07:52.681833   21482 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 17:07:52.681849   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 17:07:52.681866   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.682247   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:07:52.682292   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.682428   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.682447   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.682473   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:07:52.682517   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.682875   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.682944   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.683020   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.683178   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 17:07:52.683552   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.684349   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:07:52.685280   21482 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:07:52.685338   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.684765   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.686463   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 17:07:52.686518   21482 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 17:07:52.686621   21482 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:07:52.686633   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:07:52.686647   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.686777   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.686846   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.686861   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.686940   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.687075   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.687152   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.687287   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.687302   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.687709   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.687901   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.688010   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.688141   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.688962   21482 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 17:07:52.688973   21482 out.go:177]   - Using image docker.io/busybox:stable
	I1028 17:07:52.689244   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.689518   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.689542   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.689716   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.689887   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.689987   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.690160   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 17:07:52.690177   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 17:07:52.690184   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.690193   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.690192   21482 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:07:52.690237   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 17:07:52.690247   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:52.693427   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693455   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693722   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.693745   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693763   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:52.693781   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:52.693880   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.694051   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.694073   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:52.694191   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.694206   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:52.694323   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:07:52.694355   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:52.694469   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	W1028 17:07:52.695271   21482 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54016->192.168.39.15:22: read: connection reset by peer
	I1028 17:07:52.695290   21482 retry.go:31] will retry after 267.962113ms: ssh: handshake failed: read tcp 192.168.39.1:54016->192.168.39.15:22: read: connection reset by peer
	I1028 17:07:52.992291   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 17:07:53.031200   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 17:07:53.137377   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 17:07:53.137762   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:07:53.157537   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:07:53.162008   21482 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:07:53.162027   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 17:07:53.165301   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 17:07:53.165317   21482 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 17:07:53.177271   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 17:07:53.177287   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 17:07:53.192585   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 17:07:53.195467   21482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 17:07:53.195480   21482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 17:07:53.217704   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 17:07:53.217723   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 17:07:53.256907   21482 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 17:07:53.256931   21482 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 17:07:53.293602   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 17:07:53.333544   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 17:07:53.336198   21482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:07:53.336259   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:07:53.358378   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 17:07:53.358398   21482 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 17:07:53.413385   21482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 17:07:53.413407   21482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 17:07:53.419354   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 17:07:53.419374   21482 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 17:07:53.424510   21482 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:07:53.424527   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 17:07:53.451477   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 17:07:53.451497   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 17:07:53.502001   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 17:07:53.617900   21482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:07:53.617923   21482 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 17:07:53.640962   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 17:07:53.640986   21482 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 17:07:53.663527   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 17:07:53.692384   21482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 17:07:53.692415   21482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 17:07:53.708314   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 17:07:53.708338   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 17:07:53.887027   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 17:07:53.932149   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 17:07:53.932181   21482 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 17:07:54.003409   21482 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:07:54.003429   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 17:07:54.018125   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 17:07:54.018146   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 17:07:54.289059   21482 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:07:54.289079   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 17:07:54.320159   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 17:07:54.364024   21482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 17:07:54.364057   21482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 17:07:54.579938   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.587609067s)
	I1028 17:07:54.579985   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:54.579993   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:54.580296   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:54.580362   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:54.580374   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:54.580387   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:54.580396   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:54.580621   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:54.580637   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:54.655316   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 17:07:54.655339   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 17:07:54.723017   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:07:55.001460   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 17:07:55.001486   21482 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 17:07:55.279179   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 17:07:55.279202   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 17:07:55.608911   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 17:07:55.608932   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 17:07:55.927681   21482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:07:55.927723   21482 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 17:07:56.189657   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 17:07:57.000004   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.968767759s)
	I1028 17:07:57.000066   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.862285517s)
	I1028 17:07:57.000080   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000094   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000105   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000122   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000023   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.862613262s)
	I1028 17:07:57.000164   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000192   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000554   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000566   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000580   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000589   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000589   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000597   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000595   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000600   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000612   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000598   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000620   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000629   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.000639   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000675   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.000853   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000880   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000887   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000886   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.000898   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.000969   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.000993   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.001001   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.199431   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.199494   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.199805   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.199863   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.199880   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.575231   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.417659106s)
	I1028 17:07:57.575671   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.383039171s)
	I1028 17:07:57.575730   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.575752   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.575839   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.575904   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.576019   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.576035   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.576045   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.576051   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.576158   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.576432   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.576460   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.576491   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.578115   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:57.578119   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.578142   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:57.578152   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:57.578168   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:57.578398   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:57.578412   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:59.165330   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.871692113s)
	I1028 17:07:59.165393   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:59.165412   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:59.165761   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:59.165781   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:59.165789   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:07:59.165797   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:07:59.165797   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:59.166053   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:07:59.166072   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:07:59.166094   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:07:59.677575   21482 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 17:07:59.677610   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:07:59.680719   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:59.681101   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:07:59.681132   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:07:59.681315   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:07:59.681491   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:07:59.681591   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:07:59.681679   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:08:00.007636   21482 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 17:08:00.052774   21482 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.716549471s)
	I1028 17:08:00.052797   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.719216936s)
	I1028 17:08:00.052844   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.052856   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.052865   21482 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.716579374s)
	I1028 17:08:00.052891   21482 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
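	For reference, the sed pipeline completed just above rewrites the coredns ConfigMap so that the Corefile gains a hosts stanza ahead of the forward plugin (and a log directive ahead of errors). Reconstructed from the command itself rather than captured from the cluster, the edited Corefile region looks roughly like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf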
	I1028 17:08:00.052973   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.550945492s)
	I1028 17:08:00.052998   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053011   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053102   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.3895371s)
	I1028 17:08:00.053133   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053152   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053256   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.053301   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053317   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053318   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.733133317s)
	I1028 17:08:00.053326   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053337   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053352   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053426   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.053440   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053445   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.053448   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053457   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053477   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.053489   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053497   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.053504   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.053947   21482 node_ready.go:35] waiting up to 6m0s for node "addons-186035" to be "Ready" ...
	I1028 17:08:00.054065   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054091   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054098   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.054159   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054183   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054208   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054216   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.054226   21482 addons.go:475] Verifying addon registry=true in "addons-186035"
	I1028 17:08:00.054236   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054247   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.054254   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.054261   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.054543   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.054576   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.054582   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.053270   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.166214019s)
	I1028 17:08:00.055239   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.055252   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.055350   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.055358   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.055368   21482 addons.go:475] Verifying addon ingress=true in "addons-186035"
	I1028 17:08:00.054207   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.056518   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.056533   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.056541   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.056548   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.056768   21482 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-186035 service yakd-dashboard -n yakd-dashboard
	
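	On a headless CI host the interactive command above can be replaced with its --url form, which prints the service URL instead of opening it (service name and namespace taken from the hint above):

	        minikube -p addons-186035 service yakd-dashboard -n yakd-dashboard --url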
	I1028 17:08:00.056807   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.057174   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.057185   21482 addons.go:475] Verifying addon metrics-server=true in "addons-186035"
	I1028 17:08:00.056814   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.056877   21482 out.go:177] * Verifying registry addon...
	I1028 17:08:00.057591   21482 out.go:177] * Verifying ingress addon...
	I1028 17:08:00.058962   21482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 17:08:00.059980   21482 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 17:08:00.083213   21482 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 17:08:00.083243   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:00.084652   21482 node_ready.go:49] node "addons-186035" has status "Ready":"True"
	I1028 17:08:00.084671   21482 node_ready.go:38] duration metric: took 30.703689ms for node "addons-186035" to be "Ready" ...
	I1028 17:08:00.084678   21482 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:08:00.084682   21482 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 17:08:00.084697   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:00.093471   21482 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace to be "Ready" ...
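	The node_ready.go and kapi.go waits above are client-go polling loops inside minikube itself; purely as an illustration (these are not commands the test runs), the same readiness conditions expressed against the same context with plain kubectl would be roughly:

	        kubectl --context addons-186035 wait --for=condition=Ready node/addons-186035 --timeout=6m
	        kubectl --context addons-186035 -n kube-system wait --for=condition=Ready pod \
	          -l kubernetes.io/minikube-addons=registry --timeout=6m
	        kubectl --context addons-186035 -n ingress-nginx wait --for=condition=Ready pod \
	          -l app.kubernetes.io/name=ingress-nginx --timeout=6m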
	I1028 17:08:00.164740   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:00.164763   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:00.165129   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:00.165151   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:00.165166   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:00.179805   21482 addons.go:234] Setting addon gcp-auth=true in "addons-186035"
	I1028 17:08:00.179849   21482 host.go:66] Checking if "addons-186035" exists ...
	I1028 17:08:00.180129   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:08:00.180162   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:08:00.194093   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34197
	I1028 17:08:00.194519   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:08:00.194982   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:08:00.195006   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:08:00.195323   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:08:00.195867   21482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:08:00.195916   21482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:08:00.209995   21482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I1028 17:08:00.210372   21482 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:08:00.210817   21482 main.go:141] libmachine: Using API Version  1
	I1028 17:08:00.210839   21482 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:08:00.211156   21482 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:08:00.211354   21482 main.go:141] libmachine: (addons-186035) Calling .GetState
	I1028 17:08:00.212829   21482 main.go:141] libmachine: (addons-186035) Calling .DriverName
	I1028 17:08:00.213065   21482 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 17:08:00.213091   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHHostname
	I1028 17:08:00.215442   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:08:00.215831   21482 main.go:141] libmachine: (addons-186035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:e8:0a", ip: ""} in network mk-addons-186035: {Iface:virbr1 ExpiryTime:2024-10-28 18:07:21 +0000 UTC Type:0 Mac:52:54:00:fd:e8:0a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-186035 Clientid:01:52:54:00:fd:e8:0a}
	I1028 17:08:00.215858   21482 main.go:141] libmachine: (addons-186035) DBG | domain addons-186035 has defined IP address 192.168.39.15 and MAC address 52:54:00:fd:e8:0a in network mk-addons-186035
	I1028 17:08:00.215988   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHPort
	I1028 17:08:00.216138   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHKeyPath
	I1028 17:08:00.216297   21482 main.go:141] libmachine: (addons-186035) Calling .GetSSHUsername
	I1028 17:08:00.216434   21482 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/addons-186035/id_rsa Username:docker}
	I1028 17:08:00.575318   21482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-186035" context rescaled to 1 replicas
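	The rescale logged by kapi.go:214 above amounts to dropping the coredns Deployment to a single replica; as a plain kubectl equivalent (illustrative only):

	        kubectl --context addons-186035 -n kube-system scale deployment coredns --replicas=1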
	I1028 17:08:00.620405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:00.620572   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:00.920067   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.197002085s)
	W1028 17:08:00.920122   21482 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 17:08:00.920153   21482 retry.go:31] will retry after 343.96168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
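	The failure above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply that creates the snapshot.storage.k8s.io CRDs, so the API server has no resource mapping for the new kind yet. minikube handles this by retrying after ~344ms (and the retried apply below adds --force). A manual workaround, sketched here for reference only and not what the addon actually does, is to apply the CRD manifests first, wait for them to reach the Established condition, and only then apply the snapshot class and controller manifests:

	        kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	          -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	          -f snapshot.storage.k8s.io_volumesnapshots.yaml
	        kubectl wait --for condition=established --timeout=60s \
	          crd/volumesnapshotclasses.snapshot.storage.k8s.io
	        kubectl apply -f csi-hostpath-snapshotclass.yaml \
	          -f rbac-volume-snapshot-controller.yaml \
	          -f volume-snapshot-controller-deployment.yaml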
	I1028 17:08:01.067182   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:01.070157   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:01.264691   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 17:08:01.566332   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:01.573463   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:02.093689   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.903983014s)
	I1028 17:08:02.093741   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:02.093749   21482 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.880661358s)
	I1028 17:08:02.093756   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:02.094106   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:02.094119   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:02.094135   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:02.094149   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:02.094156   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:02.094376   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:02.094394   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:02.094403   21482 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-186035"
	I1028 17:08:02.094407   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:02.095210   21482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 17:08:02.096013   21482 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 17:08:02.097296   21482 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 17:08:02.098239   21482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 17:08:02.098308   21482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 17:08:02.098322   21482 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 17:08:02.144105   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:02.144276   21482 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 17:08:02.144300   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:02.144302   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:02.236186   21482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 17:08:02.236215   21482 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 17:08:02.266097   21482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 17:08:02.266124   21482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 17:08:02.287509   21482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 17:08:02.441509   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:02.565991   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:02.566335   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:02.612600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:03.065354   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:03.065644   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:03.104661   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:03.194724   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.929982887s)
	I1028 17:08:03.194779   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.194795   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.195072   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.195089   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.195098   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.195106   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.195108   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:03.195355   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.195367   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.195384   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:03.583470   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:03.587388   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:03.616534   21482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.328985963s)
	I1028 17:08:03.616588   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.616604   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.616852   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.616867   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.616875   21482 main.go:141] libmachine: Making call to close driver server
	I1028 17:08:03.616880   21482 main.go:141] libmachine: (addons-186035) Calling .Close
	I1028 17:08:03.617123   21482 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:08:03.617171   21482 main.go:141] libmachine: (addons-186035) DBG | Closing plugin on server side
	I1028 17:08:03.617173   21482 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:08:03.618143   21482 addons.go:475] Verifying addon gcp-auth=true in "addons-186035"
	I1028 17:08:03.619735   21482 out.go:177] * Verifying gcp-auth addon...
	I1028 17:08:03.621949   21482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 17:08:03.626938   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:03.675775   21482 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 17:08:03.675798   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:04.064260   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:04.064682   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:04.102419   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:04.125117   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:04.563428   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:04.564212   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:04.598220   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:04.664171   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:04.664832   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:05.064896   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:05.065096   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:05.102557   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:05.125922   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:05.563736   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:05.564613   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:05.664930   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:05.665237   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:06.064278   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:06.064758   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:06.102703   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:06.124760   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:06.563765   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:06.564119   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:06.600398   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:06.604018   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:06.625881   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:07.064772   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:07.065197   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:07.102774   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:07.126075   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:07.661495   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:07.661879   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:07.661984   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:07.665091   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:08.064122   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:08.065264   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:08.103705   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:08.125763   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:08.564521   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:08.564560   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:08.602430   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:08.625261   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:09.067396   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:09.067686   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:09.100320   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:09.103813   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:09.126344   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:09.562504   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:09.564313   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:09.601691   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:09.626078   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:10.062382   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:10.063869   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:10.102092   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:10.125907   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:10.563012   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:10.564566   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:10.602233   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:10.624341   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:11.069590   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:11.070169   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:11.170557   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:11.171415   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:11.564848   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:11.565964   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:11.600299   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:11.602859   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:11.624369   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:12.062912   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:12.064583   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:12.102933   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:12.125191   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:12.563946   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:12.564048   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:12.603455   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:12.625250   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:13.063891   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:13.064035   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:13.102773   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:13.125473   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:13.563486   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:13.564961   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:13.602486   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:13.625570   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:14.063649   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:14.063883   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:14.100041   21482 pod_ready.go:103] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:14.102518   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:14.128700   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:14.563404   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:14.564018   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:14.602465   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:14.625767   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:15.065245   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:15.065311   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:15.102854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:15.125274   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:15.562511   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:15.564831   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:15.603841   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:15.625672   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:16.062834   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.064057   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.101783   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:16.125696   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:16.566695   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:16.566806   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:16.599556   21482 pod_ready.go:93] pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.599579   21482 pod_ready.go:82] duration metric: took 16.50608757s for pod "amd-gpu-device-plugin-cmh8f" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.599593   21482 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.601650   21482 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9zldx" not found
	I1028 17:08:16.601667   21482 pod_ready.go:82] duration metric: took 2.068887ms for pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace to be "Ready" ...
	E1028 17:08:16.601676   21482 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-9zldx" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9zldx" not found
	I1028 17:08:16.601681   21482 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-znpww" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.603189   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:16.605560   21482 pod_ready.go:93] pod "coredns-7c65d6cfc9-znpww" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.605575   21482 pod_ready.go:82] duration metric: took 3.88807ms for pod "coredns-7c65d6cfc9-znpww" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.605585   21482 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.609080   21482 pod_ready.go:93] pod "etcd-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.609093   21482 pod_ready.go:82] duration metric: took 3.502025ms for pod "etcd-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.609103   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.613324   21482 pod_ready.go:93] pod "kube-apiserver-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.613341   21482 pod_ready.go:82] duration metric: took 4.230713ms for pod "kube-apiserver-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.613351   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.624015   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:16.798172   21482 pod_ready.go:93] pod "kube-controller-manager-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:16.798209   21482 pod_ready.go:82] duration metric: took 184.847708ms for pod "kube-controller-manager-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:16.798229   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qhnsh" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.064196   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.064776   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.103180   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:17.128989   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:17.197618   21482 pod_ready.go:93] pod "kube-proxy-qhnsh" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:17.197644   21482 pod_ready.go:82] duration metric: took 399.40754ms for pod "kube-proxy-qhnsh" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.197654   21482 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.565210   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:17.566634   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:17.597416   21482 pod_ready.go:93] pod "kube-scheduler-addons-186035" in "kube-system" namespace has status "Ready":"True"
	I1028 17:08:17.597437   21482 pod_ready.go:82] duration metric: took 399.777939ms for pod "kube-scheduler-addons-186035" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.597447   21482 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace to be "Ready" ...
	I1028 17:08:17.602789   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:17.624549   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:18.062940   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.064607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.103465   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.126027   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:18.562768   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:18.564516   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:18.602820   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:18.624331   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:19.064258   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.064492   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.103543   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.125509   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:19.564160   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:19.565280   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:19.602522   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:19.603068   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:19.625127   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.062833   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.063948   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.102888   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.124780   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:20.563443   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:20.563699   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:20.603494   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:20.624789   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.065493   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.065653   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.103119   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.128065   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:21.562189   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:21.564412   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:21.603308   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:21.603915   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:21.625069   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.063065   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.065223   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.103656   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.125112   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:22.562504   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:22.564672   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:22.604051   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:22.624596   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.062625   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.064113   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.103983   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.125153   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:23.563163   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:23.564607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:23.602600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:23.625373   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.062460   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.064582   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.103771   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.104019   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:24.126817   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:24.564462   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:24.564847   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:24.603100   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:24.626047   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.063259   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.065124   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.102410   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.125438   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:25.565176   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:25.565479   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:25.603320   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:25.625909   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.171781   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.172983   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:26.173976   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.174183   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.176671   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:26.564071   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:26.564277   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:26.603833   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:26.626562   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.067485   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.067876   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.103572   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.128885   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:27.561950   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:27.563969   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:27.602667   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:27.625144   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.062858   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.064176   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.106096   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.124951   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:28.563217   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:28.564837   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:28.601844   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:28.603204   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:28.625172   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.063482   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.064306   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.102405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.124537   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:29.563437   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:29.564683   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:29.602135   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:29.624611   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.063593   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.063779   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.102314   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.125159   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:30.562854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:30.564218   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:30.602854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:30.603435   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:30.625191   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.063723   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.063944   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.102134   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.125088   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:31.562681   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:31.563833   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:31.603068   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:31.627154   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.065865   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.066467   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.102535   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.124783   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:32.563963   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:32.564399   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:32.603461   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:32.604050   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:32.625015   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.064004   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.065335   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.102888   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.125122   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:33.563416   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:33.564694   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:33.603279   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:33.624705   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.063801   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.064956   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.102876   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.126256   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:34.562487   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:34.563716   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:34.605171   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:34.607869   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:34.625059   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.062852   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.063919   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.103499   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.124777   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:35.563895   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:35.564275   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:35.602678   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:35.625198   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.063276   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.064064   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.103062   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.124675   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:36.562422   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:36.564062   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:36.602847   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:36.625295   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.064627   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.064852   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.103125   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.104518   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:37.125058   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:37.564375   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:37.565160   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:37.603404   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:37.626894   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.063658   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.064168   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.103103   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.125226   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:38.564084   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:38.564520   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:38.602646   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:38.625027   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.063116   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.063580   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.103500   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.124482   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:39.563044   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:39.564395   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:39.603498   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:39.604310   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:39.624947   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.062843   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.064334   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.102653   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.124686   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:40.563742   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:40.564134   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:40.604408   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:40.625456   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.068721   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.069429   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.102783   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.125380   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:41.562630   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:41.564920   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:41.602500   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:41.625234   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.062592   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.065067   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.104016   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.106013   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:42.125814   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:42.565298   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:42.565726   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:42.603074   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:42.624726   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.063491   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.064282   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.102902   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.125357   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:43.563020   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:43.564672   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:43.602827   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:43.624099   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.064326   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.064688   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.102330   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:44.124818   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.566453   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:44.566912   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:44.612951   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:44.664103   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:44.665046   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.064204   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.064631   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.106426   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.125971   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:45.568424   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:45.568661   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:45.602848   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:45.625307   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.063740   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.063931   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.102741   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.124749   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:46.562312   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:46.563537   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:46.602662   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:46.625147   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.065769   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.065855   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.103028   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.104208   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:47.124318   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:47.562746   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:47.564203   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:47.604512   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:47.625092   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.063680   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.064944   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.104316   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.125312   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:48.563820   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:48.564025   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:48.603321   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:48.625198   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.062576   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.064907   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.102375   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.125410   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:49.562911   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:49.564373   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:49.603606   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:49.603884   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:49.625163   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.062507   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.066315   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.103288   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.125609   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:50.564862   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:50.565760   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:50.603686   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:50.625099   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.062772   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.065474   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.102577   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.124458   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:51.563841   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:51.564700   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:51.603425   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:51.608428   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:51.625419   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.064155   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.066107   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.103067   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.124798   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:52.563456   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 17:08:52.563906   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:52.602486   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:52.624416   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.064017   21482 kapi.go:107] duration metric: took 53.005051602s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 17:08:53.064297   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.166752   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:53.167057   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.565359   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:53.602659   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:53.624519   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.065167   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.102171   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.103112   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:54.125455   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:54.564446   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:54.603064   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:54.624817   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.063505   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.103977   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.125828   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:55.564998   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:55.603207   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:55.624869   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.064519   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.104275   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.105011   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:56.125608   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:56.564013   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:56.603433   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:56.624935   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.064542   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.105512   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.125374   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:57.564551   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:57.603823   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:57.626266   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.064428   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.105379   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.125279   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.674900   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:58.675385   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:58.676227   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:58.681390   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:08:59.064734   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.165692   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.166398   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:08:59.564328   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:08:59.602770   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:08:59.624640   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.067722   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.103547   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.124864   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:00.563959   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:00.602630   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:00.625664   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.064359   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.102826   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.104081   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:01.126003   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:01.565047   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:01.602886   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:01.625177   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.064597   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.103816   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:02.124361   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.564516   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:02.665311   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:02.666706   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.064485   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.105612   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.165272   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:03.567575   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:03.602983   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:03.604029   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:03.632040   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.201093   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:04.201600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:04.201837   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.565454   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:04.603309   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:04.625382   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.064321   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.102747   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.125390   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:05.565007   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:05.603585   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:05.606685   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:05.625093   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.064802   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.103985   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:06.125246   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:06.564317   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:06.602761   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:06.625333   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.064258   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.102529   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.125876   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:07.565768   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:07.606715   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:07.606974   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:07.625493   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.065197   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.104904   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.125323   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:08.564672   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:08.602583   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:08.624763   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.064709   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.103553   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.124314   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:09.566874   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:09.603230   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:09.610743   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:09.625340   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.066153   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.106244   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.127304   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:10.565383   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:10.604111   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:10.624975   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.063691   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.114424   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.124847   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:11.564556   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:11.603914   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:11.624509   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.064736   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.109924   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:12.113055   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.132971   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:12.566596   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:12.609994   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:12.667708   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.066322   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.103070   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:13.124846   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:13.564215   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:13.603250   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:13.625165   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.513075   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:14.514015   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.514249   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.517130   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:14.606252   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:14.609611   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:14.628253   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.065758   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.108804   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:15.125056   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:15.564340   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:15.602338   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:15.625587   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.064457   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.165521   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:16.165774   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.565300   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:16.604461   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:16.605819   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:16.625276   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.064359   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.103306   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.125132   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:17.564072   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:17.602636   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:17.626746   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.063891   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.102802   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.127424   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:18.566543   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:18.604696   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:18.606075   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:18.625852   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.064666   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.115045   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.126323   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:19.564401   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:19.602781   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:19.664784   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.064602   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.102595   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.125193   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:20.563960   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:20.622324   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:20.629214   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:20.629256   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.064590   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.103658   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.130089   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:21.564317   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:21.602944   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:21.625303   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.063593   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.103333   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:22.124448   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.568215   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:22.668871   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:22.669590   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.064335   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.102567   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.103667   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:23.124839   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:23.564031   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:23.602509   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:23.624568   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.064687   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:24.106654   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:24.125169   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:24.564699   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:24.602905   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:24.625340   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.063661   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:25.103411   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.104365   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:25.125091   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:25.564194   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:25.602495   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:25.627520   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.064962   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:26.102328   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 17:09:26.125276   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:26.565169   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:26.606991   21482 kapi.go:107] duration metric: took 1m24.508752784s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 17:09:26.624305   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.064603   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:27.125378   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:27.564840   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:27.603847   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:27.624297   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.064896   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:28.124661   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:28.565400   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:28.625595   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.065139   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:29.125712   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:29.565516   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:29.625110   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.064476   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:30.102674   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:30.124935   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:30.563580   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:30.625405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:31.064458   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:31.124789   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:31.564434   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:31.624421   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:32.064417   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:32.103759   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:32.125125   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:32.565658   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:32.624670   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:33.063600   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:33.124672   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:33.564752   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:33.624729   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:34.065035   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:34.124931   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:34.563808   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:34.603818   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:34.625648   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:35.063961   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:35.125693   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:35.565449   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:35.625415   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:36.064875   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:36.124910   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:36.564599   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:36.604280   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:36.624988   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:37.064127   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:37.125642   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:37.565355   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:37.625600   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:38.063462   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:38.124837   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:38.564215   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:38.624499   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:39.064659   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:39.103618   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:39.125502   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:39.565245   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:39.625807   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:40.064244   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:40.126555   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:40.565084   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:40.624526   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:41.064837   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:41.125639   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:41.564331   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:41.603403   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:41.624851   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:42.064634   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:42.125137   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:42.564648   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:42.624616   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:43.063925   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:43.125265   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:43.564637   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:43.603602   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:43.625078   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:44.063979   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:44.125539   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:44.564913   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:44.625937   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:45.066687   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:45.125353   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:45.571313   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:45.607054   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:45.625638   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:46.064242   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:46.124446   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:46.564558   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:46.625975   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:47.064144   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:47.124927   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:47.564681   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:47.625930   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:48.064835   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:48.103871   21482 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"False"
	I1028 17:09:48.125560   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:48.565566   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:48.611661   21482 pod_ready.go:93] pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:48.611687   21482 pod_ready.go:82] duration metric: took 1m31.014233038s for pod "metrics-server-84c5f94fbc-6vwqq" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:48.611698   21482 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rtk85" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:48.622310   21482 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rtk85" in "kube-system" namespace has status "Ready":"True"
	I1028 17:09:48.622330   21482 pod_ready.go:82] duration metric: took 10.624805ms for pod "nvidia-device-plugin-daemonset-rtk85" in "kube-system" namespace to be "Ready" ...
	I1028 17:09:48.622346   21482 pod_ready.go:39] duration metric: took 1m48.53765719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
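	(Context for the readiness waits above: pod_ready.go is simply polling the Kubernetes API and only counts a pod as done once its PodReady condition reports True. A minimal client-go sketch of that pattern follows; the kubeconfig path, namespace, label selector, and timeouts are illustrative assumptions, not minikube's actual code.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from a kubeconfig (path is an assumption for the sketch).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms (the log above ticks roughly twice per second) until every
		// pod matching the label selector reports the Ready condition as True.
		selector := "kubernetes.io/minikube-addons=gcp-auth"
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API errors: keep waiting
				}
				if len(pods.Items) == 0 {
					return false, nil // nothing scheduled yet
				}
				for _, p := range pods.Items {
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all pods matching", selector, "are Ready")
	}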
	I1028 17:09:48.622366   21482 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:09:48.622398   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:09:48.622443   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:09:48.665411   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:48.681068   21482 cri.go:89] found id: "deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:09:48.681087   21482 cri.go:89] found id: ""
	I1028 17:09:48.681103   21482 logs.go:282] 1 containers: [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231]
	I1028 17:09:48.681146   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.689469   21482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:09:48.689523   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:09:48.732164   21482 cri.go:89] found id: "c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:09:48.732181   21482 cri.go:89] found id: ""
	I1028 17:09:48.732188   21482 logs.go:282] 1 containers: [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7]
	I1028 17:09:48.732231   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.736269   21482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:09:48.736325   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:09:48.771595   21482 cri.go:89] found id: "614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:09:48.771619   21482 cri.go:89] found id: ""
	I1028 17:09:48.771626   21482 logs.go:282] 1 containers: [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d]
	I1028 17:09:48.771669   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.775879   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:09:48.775927   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:09:48.813607   21482 cri.go:89] found id: "2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:09:48.813635   21482 cri.go:89] found id: ""
	I1028 17:09:48.813645   21482 logs.go:282] 1 containers: [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0]
	I1028 17:09:48.813691   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.818152   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:09:48.818202   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:09:48.854915   21482 cri.go:89] found id: "2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:09:48.854934   21482 cri.go:89] found id: ""
	I1028 17:09:48.854941   21482 logs.go:282] 1 containers: [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924]
	I1028 17:09:48.854978   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.859147   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:09:48.859206   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:09:48.900971   21482 cri.go:89] found id: "d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:09:48.900993   21482 cri.go:89] found id: ""
	I1028 17:09:48.901000   21482 logs.go:282] 1 containers: [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951]
	I1028 17:09:48.901045   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:09:48.905230   21482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:09:48.905300   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:09:48.949084   21482 cri.go:89] found id: ""
	I1028 17:09:48.949106   21482 logs.go:282] 0 containers: []
	W1028 17:09:48.949113   21482 logs.go:284] No container was found matching "kindnet"
	I1028 17:09:48.949121   21482 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:09:48.949136   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:09:49.064804   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:49.086928   21482 logs.go:123] Gathering logs for kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] ...
	I1028 17:09:49.086950   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:09:49.126078   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:49.133053   21482 logs.go:123] Gathering logs for kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] ...
	I1028 17:09:49.133073   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:09:49.176844   21482 logs.go:123] Gathering logs for kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] ...
	I1028 17:09:49.176869   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:09:49.214094   21482 logs.go:123] Gathering logs for kubelet ...
	I1028 17:09:49.214117   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:09:49.267628   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:49.267806   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:09:49.267926   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:49.268072   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:09:49.306933   21482 logs.go:123] Gathering logs for etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] ...
	I1028 17:09:49.306970   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:09:49.373869   21482 logs.go:123] Gathering logs for coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] ...
	I1028 17:09:49.373895   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:09:49.415480   21482 logs.go:123] Gathering logs for kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] ...
	I1028 17:09:49.415507   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:09:49.478999   21482 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:09:49.479027   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:09:49.568173   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:49.625453   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:50.064764   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:50.125255   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:50.512731   21482 logs.go:123] Gathering logs for container status ...
	I1028 17:09:50.512775   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:09:50.565714   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:50.577132   21482 logs.go:123] Gathering logs for dmesg ...
	I1028 17:09:50.577157   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:09:50.601252   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:09:50.601278   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:09:50.601334   21482 out.go:270] X Problems detected in kubelet:
	W1028 17:09:50.601355   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:50.601363   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:09:50.601375   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:09:50.601390   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:09:50.601396   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:09:50.601406   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:09:50.626350   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:51.064650   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:51.126549   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:51.564437   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:51.625395   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:52.075607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:52.125778   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:52.565329   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:52.626204   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:53.065344   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:53.125803   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:53.565561   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:53.625727   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:54.064746   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:54.125120   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:54.564058   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:54.625455   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:55.064192   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:55.125572   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:55.565216   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:55.625501   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:56.066055   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:56.125506   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:56.565102   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:56.628873   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:57.065255   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:57.125877   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:57.565594   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:57.625149   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:58.064763   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:58.125969   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:58.563979   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:58.625194   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:59.064312   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:59.125712   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:09:59.565493   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:09:59.626447   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:00.064669   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:00.126469   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:00.564110   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:00.602569   21482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:10:00.622556   21482 api_server.go:72] duration metric: took 2m8.097343833s to wait for apiserver process to appear ...
	I1028 17:10:00.622579   21482 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:10:00.622613   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:00.622673   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:00.625854   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:00.661753   21482 cri.go:89] found id: "deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:00.661769   21482 cri.go:89] found id: ""
	I1028 17:10:00.661778   21482 logs.go:282] 1 containers: [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231]
	I1028 17:10:00.661835   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.668326   21482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:00.668383   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:00.713173   21482 cri.go:89] found id: "c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:10:00.713199   21482 cri.go:89] found id: ""
	I1028 17:10:00.713206   21482 logs.go:282] 1 containers: [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7]
	I1028 17:10:00.713262   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.717355   21482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:00.717404   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:00.756433   21482 cri.go:89] found id: "614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:00.756460   21482 cri.go:89] found id: ""
	I1028 17:10:00.756483   21482 logs.go:282] 1 containers: [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d]
	I1028 17:10:00.756539   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.760590   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:00.760650   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:00.809191   21482 cri.go:89] found id: "2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:00.809220   21482 cri.go:89] found id: ""
	I1028 17:10:00.809230   21482 logs.go:282] 1 containers: [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0]
	I1028 17:10:00.809282   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.813254   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:00.813307   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:00.854158   21482 cri.go:89] found id: "2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:00.854177   21482 cri.go:89] found id: ""
	I1028 17:10:00.854183   21482 logs.go:282] 1 containers: [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924]
	I1028 17:10:00.854224   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.858277   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:00.858326   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:00.895417   21482 cri.go:89] found id: "d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:00.895437   21482 cri.go:89] found id: ""
	I1028 17:10:00.895445   21482 logs.go:282] 1 containers: [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951]
	I1028 17:10:00.895495   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:00.899458   21482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:00.899508   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:00.935040   21482 cri.go:89] found id: ""
	I1028 17:10:00.935063   21482 logs.go:282] 0 containers: []
	W1028 17:10:00.935071   21482 logs.go:284] No container was found matching "kindnet"
	I1028 17:10:00.935086   21482 logs.go:123] Gathering logs for kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] ...
	I1028 17:10:00.935097   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:00.986889   21482 logs.go:123] Gathering logs for etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] ...
	I1028 17:10:00.986917   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:10:01.050984   21482 logs.go:123] Gathering logs for coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] ...
	I1028 17:10:01.051027   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:01.064147   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:01.093641   21482 logs.go:123] Gathering logs for kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] ...
	I1028 17:10:01.093675   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:01.125585   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:01.141526   21482 logs.go:123] Gathering logs for kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] ...
	I1028 17:10:01.141549   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:01.178206   21482 logs.go:123] Gathering logs for kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] ...
	I1028 17:10:01.178228   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:01.236198   21482 logs.go:123] Gathering logs for container status ...
	I1028 17:10:01.236228   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:01.294101   21482 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:01.294130   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:01.308338   21482 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:01.308362   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:01.419465   21482 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:01.419494   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:01.565583   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:01.626153   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:02.065566   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:02.126712   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:02.346895   21482 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:02.346934   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 17:10:02.405044   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.405265   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:02.405431   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.405666   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:02.439725   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:02.439749   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:02.439807   21482 out.go:270] X Problems detected in kubelet:
	W1028 17:10:02.439819   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.439826   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:02.439836   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:02.439841   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:02.439846   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:02.439852   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:10:02.564714   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:02.625644   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:03.064212   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:03.125405   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:03.565108   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:03.625244   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:04.064095   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:04.125481   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:04.564664   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:04.624961   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:05.064130   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:05.125671   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:05.564916   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:05.626114   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:06.064607   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:06.125984   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:06.564395   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:06.625715   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:07.064818   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:07.125205   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:07.565116   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:07.625982   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:08.064503   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:08.125916   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:08.564008   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:08.625319   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:09.064270   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:09.127687   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:09.565475   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:09.625592   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:10.064655   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:10.124560   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:10.564806   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:10.624951   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:11.064769   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:11.124909   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:11.564208   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:11.625454   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:12.064643   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:12.125169   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:12.441318   21482 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1028 17:10:12.446440   21482 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1028 17:10:12.447413   21482 api_server.go:141] control plane version: v1.31.2
	I1028 17:10:12.447435   21482 api_server.go:131] duration metric: took 11.82484834s to wait for apiserver health ...
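	(Context for the healthz gate just above: the apiserver health wait is a plain HTTPS GET against the /healthz endpoint, retried until it returns 200 with an "ok" body. A rough Go equivalent follows; the address is taken from the log line, and skipping certificate verification is an assumption made only to keep the sketch self-contained.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Insecure TLS is for illustration only; minikube uses the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.39.15:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second) // retry until the control plane answers
		}
	}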
	I1028 17:10:12.447444   21482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:10:12.447468   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 17:10:12.447520   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 17:10:12.486393   21482 cri.go:89] found id: "deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:12.486419   21482 cri.go:89] found id: ""
	I1028 17:10:12.486428   21482 logs.go:282] 1 containers: [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231]
	I1028 17:10:12.486489   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.490768   21482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 17:10:12.490833   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 17:10:12.530655   21482 cri.go:89] found id: "c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
	I1028 17:10:12.530675   21482 cri.go:89] found id: ""
	I1028 17:10:12.530684   21482 logs.go:282] 1 containers: [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7]
	I1028 17:10:12.530738   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.534929   21482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 17:10:12.534985   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 17:10:12.565431   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:12.594370   21482 cri.go:89] found id: "614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:12.594396   21482 cri.go:89] found id: ""
	I1028 17:10:12.594406   21482 logs.go:282] 1 containers: [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d]
	I1028 17:10:12.594457   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.600281   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 17:10:12.600346   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 17:10:12.626070   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:12.640069   21482 cri.go:89] found id: "2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:12.640089   21482 cri.go:89] found id: ""
	I1028 17:10:12.640096   21482 logs.go:282] 1 containers: [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0]
	I1028 17:10:12.640145   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.644085   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 17:10:12.644120   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 17:10:12.683856   21482 cri.go:89] found id: "2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:12.683872   21482 cri.go:89] found id: ""
	I1028 17:10:12.683879   21482 logs.go:282] 1 containers: [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924]
	I1028 17:10:12.683927   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.688035   21482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 17:10:12.688100   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 17:10:12.725241   21482 cri.go:89] found id: "d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:12.725259   21482 cri.go:89] found id: ""
	I1028 17:10:12.725266   21482 logs.go:282] 1 containers: [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951]
	I1028 17:10:12.725311   21482 ssh_runner.go:195] Run: which crictl
	I1028 17:10:12.729385   21482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 17:10:12.729451   21482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 17:10:12.779592   21482 cri.go:89] found id: ""
	I1028 17:10:12.779620   21482 logs.go:282] 0 containers: []
	W1028 17:10:12.779630   21482 logs.go:284] No container was found matching "kindnet"
	I1028 17:10:12.779640   21482 logs.go:123] Gathering logs for dmesg ...
	I1028 17:10:12.779655   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 17:10:12.796430   21482 logs.go:123] Gathering logs for describe nodes ...
	I1028 17:10:12.796453   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 17:10:12.907992   21482 logs.go:123] Gathering logs for kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] ...
	I1028 17:10:12.908024   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231"
	I1028 17:10:12.960227   21482 logs.go:123] Gathering logs for coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] ...
	I1028 17:10:12.960252   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d"
	I1028 17:10:12.998312   21482 logs.go:123] Gathering logs for kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] ...
	I1028 17:10:12.998340   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0"
	I1028 17:10:13.052115   21482 logs.go:123] Gathering logs for kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] ...
	I1028 17:10:13.052143   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951"
	I1028 17:10:13.064342   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:13.111093   21482 logs.go:123] Gathering logs for CRI-O ...
	I1028 17:10:13.111119   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 17:10:13.126771   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:13.565186   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:13.625150   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:14.036892   21482 logs.go:123] Gathering logs for kubelet ...
	I1028 17:10:14.036932   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 17:10:14.063799   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1028 17:10:14.110874   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.111053   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:14.111177   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.111327   21482 logs.go:138] Found kubelet problem: Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:14.125976   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:14.146814   21482 logs.go:123] Gathering logs for kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] ...
	I1028 17:10:14.146839   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924"
	I1028 17:10:14.191625   21482 logs.go:123] Gathering logs for container status ...
	I1028 17:10:14.191650   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 17:10:14.303274   21482 logs.go:123] Gathering logs for etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] ...
	I1028 17:10:14.303319   21482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7"
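The log-gathering steps above locate each component's container with crictl and then tail its logs, plus the crio and kubelet journals. A condensed sketch of the same collection, run directly on the node (the container ID is the kube-apiserver ID found earlier in this log; tail lengths mirror the commands shown above):

    # find the container ID for a component, then tail its logs
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl logs --tail 400 deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231
    # service-level logs for the runtime and the kubelet
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400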
	I1028 17:10:14.384065   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:14.384094   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 17:10:14.384145   21482 out.go:270] X Problems detected in kubelet:
	W1028 17:10:14.384157   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517316    1202 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-186035" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.384162   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517361    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	W1028 17:10:14.384169   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: W1028 17:07:55.517802    1202 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-186035" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-186035' and this object
	W1028 17:10:14.384176   21482 out.go:270]   Oct 28 17:07:55 addons-186035 kubelet[1202]: E1028 17:07:55.517823    1202 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-186035\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-186035' and this object" logger="UnhandledError"
	I1028 17:10:14.384181   21482 out.go:358] Setting ErrFile to fd 2...
	I1028 17:10:14.384185   21482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:10:14.564496   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:14.625988   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:15.063783   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:15.124703   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:15.569363   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:15.626057   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:16.067784   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:16.125792   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:16.563851   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:16.624740   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:17.064741   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:17.124946   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:17.563814   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:17.625743   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:18.317822   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:18.318723   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:18.564050   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:18.625852   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:19.064866   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:19.125591   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:19.564135   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:19.625369   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:20.064107   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:20.125577   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:20.564582   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:20.626499   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:21.065283   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:21.165262   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:21.565269   21482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 17:10:21.625513   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:22.065266   21482 kapi.go:107] duration metric: took 2m22.005281338s to wait for app.kubernetes.io/name=ingress-nginx ...
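The wait that just finished polls pods labelled app.kubernetes.io/name=ingress-nginx until they leave Pending. A roughly equivalent manual check (sketch; the ingress-nginx namespace and the 180s timeout are assumptions, not taken from this log):

    kubectl --context addons-186035 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl --context addons-186035 -n ingress-nginx wait --for=condition=ready pod \
        -l app.kubernetes.io/name=ingress-nginx --timeout=180s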
	I1028 17:10:22.125670   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:22.626265   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:23.126980   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:23.625719   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:24.125180   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:24.393674   21482 system_pods.go:59] 18 kube-system pods found
	I1028 17:10:24.393704   21482 system_pods.go:61] "amd-gpu-device-plugin-cmh8f" [5e0752de-e01b-4c91-989c-235728654d63] Running
	I1028 17:10:24.393709   21482 system_pods.go:61] "coredns-7c65d6cfc9-znpww" [5d9f893c-87ee-4a07-8ca0-7fed06690855] Running
	I1028 17:10:24.393714   21482 system_pods.go:61] "csi-hostpath-attacher-0" [ae387c41-3c73-426e-8a23-9836bb70b04c] Running
	I1028 17:10:24.393718   21482 system_pods.go:61] "csi-hostpath-resizer-0" [4f427e09-9338-4c6c-9187-448f71011f7d] Running
	I1028 17:10:24.393721   21482 system_pods.go:61] "csi-hostpathplugin-bj7bv" [ac75459b-cd05-42f9-9cdb-a2a16e61251d] Running
	I1028 17:10:24.393724   21482 system_pods.go:61] "etcd-addons-186035" [7759663a-5012-4639-889f-de52909f8a06] Running
	I1028 17:10:24.393727   21482 system_pods.go:61] "kube-apiserver-addons-186035" [42a946b2-0ce0-490f-8279-657d7f0f8172] Running
	I1028 17:10:24.393731   21482 system_pods.go:61] "kube-controller-manager-addons-186035" [175b2784-a103-4f52-8d45-137cf16ab3d0] Running
	I1028 17:10:24.393734   21482 system_pods.go:61] "kube-ingress-dns-minikube" [9018f101-e082-4dea-bf69-3e8a31a66ae8] Running
	I1028 17:10:24.393738   21482 system_pods.go:61] "kube-proxy-qhnsh" [a82fd776-0217-40e3-a973-146eb6cb0c5a] Running
	I1028 17:10:24.393740   21482 system_pods.go:61] "kube-scheduler-addons-186035" [6aced9ea-3f64-41a1-bbb0-f3fda6396aa7] Running
	I1028 17:10:24.393743   21482 system_pods.go:61] "metrics-server-84c5f94fbc-6vwqq" [2a6e6b1d-eaec-41b1-96c8-a3b0444088ec] Running
	I1028 17:10:24.393747   21482 system_pods.go:61] "nvidia-device-plugin-daemonset-rtk85" [cf1f792a-317b-462d-bd89-3d40fc15ae2e] Running
	I1028 17:10:24.393752   21482 system_pods.go:61] "registry-66c9cd494c-zzlqq" [b84d4f13-3ad1-4d7c-81fc-5def543dae51] Running
	I1028 17:10:24.393759   21482 system_pods.go:61] "registry-proxy-7nj9m" [783bc207-34a0-49f6-a31b-d358ca0aa6e3] Running
	I1028 17:10:24.393764   21482 system_pods.go:61] "snapshot-controller-56fcc65765-p7p8n" [2c816687-c0da-413a-a2e6-7491aad1e60b] Running
	I1028 17:10:24.393769   21482 system_pods.go:61] "snapshot-controller-56fcc65765-rm96g" [82f57471-8403-417f-be39-44be24e4b5cf] Running
	I1028 17:10:24.393776   21482 system_pods.go:61] "storage-provisioner" [c8b798cc-678e-4c24-9e8e-d8e87d5b7be4] Running
	I1028 17:10:24.393783   21482 system_pods.go:74] duration metric: took 11.946333127s to wait for pod list to return data ...
	I1028 17:10:24.393797   21482 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:10:24.396200   21482 default_sa.go:45] found service account: "default"
	I1028 17:10:24.396215   21482 default_sa.go:55] duration metric: took 2.413648ms for default service account to be created ...
	I1028 17:10:24.396222   21482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:10:24.404331   21482 system_pods.go:86] 18 kube-system pods found
	I1028 17:10:24.404351   21482 system_pods.go:89] "amd-gpu-device-plugin-cmh8f" [5e0752de-e01b-4c91-989c-235728654d63] Running
	I1028 17:10:24.404356   21482 system_pods.go:89] "coredns-7c65d6cfc9-znpww" [5d9f893c-87ee-4a07-8ca0-7fed06690855] Running
	I1028 17:10:24.404361   21482 system_pods.go:89] "csi-hostpath-attacher-0" [ae387c41-3c73-426e-8a23-9836bb70b04c] Running
	I1028 17:10:24.404364   21482 system_pods.go:89] "csi-hostpath-resizer-0" [4f427e09-9338-4c6c-9187-448f71011f7d] Running
	I1028 17:10:24.404367   21482 system_pods.go:89] "csi-hostpathplugin-bj7bv" [ac75459b-cd05-42f9-9cdb-a2a16e61251d] Running
	I1028 17:10:24.404370   21482 system_pods.go:89] "etcd-addons-186035" [7759663a-5012-4639-889f-de52909f8a06] Running
	I1028 17:10:24.404374   21482 system_pods.go:89] "kube-apiserver-addons-186035" [42a946b2-0ce0-490f-8279-657d7f0f8172] Running
	I1028 17:10:24.404377   21482 system_pods.go:89] "kube-controller-manager-addons-186035" [175b2784-a103-4f52-8d45-137cf16ab3d0] Running
	I1028 17:10:24.404388   21482 system_pods.go:89] "kube-ingress-dns-minikube" [9018f101-e082-4dea-bf69-3e8a31a66ae8] Running
	I1028 17:10:24.404396   21482 system_pods.go:89] "kube-proxy-qhnsh" [a82fd776-0217-40e3-a973-146eb6cb0c5a] Running
	I1028 17:10:24.404399   21482 system_pods.go:89] "kube-scheduler-addons-186035" [6aced9ea-3f64-41a1-bbb0-f3fda6396aa7] Running
	I1028 17:10:24.404402   21482 system_pods.go:89] "metrics-server-84c5f94fbc-6vwqq" [2a6e6b1d-eaec-41b1-96c8-a3b0444088ec] Running
	I1028 17:10:24.404406   21482 system_pods.go:89] "nvidia-device-plugin-daemonset-rtk85" [cf1f792a-317b-462d-bd89-3d40fc15ae2e] Running
	I1028 17:10:24.404409   21482 system_pods.go:89] "registry-66c9cd494c-zzlqq" [b84d4f13-3ad1-4d7c-81fc-5def543dae51] Running
	I1028 17:10:24.404412   21482 system_pods.go:89] "registry-proxy-7nj9m" [783bc207-34a0-49f6-a31b-d358ca0aa6e3] Running
	I1028 17:10:24.404415   21482 system_pods.go:89] "snapshot-controller-56fcc65765-p7p8n" [2c816687-c0da-413a-a2e6-7491aad1e60b] Running
	I1028 17:10:24.404419   21482 system_pods.go:89] "snapshot-controller-56fcc65765-rm96g" [82f57471-8403-417f-be39-44be24e4b5cf] Running
	I1028 17:10:24.404423   21482 system_pods.go:89] "storage-provisioner" [c8b798cc-678e-4c24-9e8e-d8e87d5b7be4] Running
	I1028 17:10:24.404429   21482 system_pods.go:126] duration metric: took 8.203232ms to wait for k8s-apps to be running ...
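The listing above enumerates the 18 kube-system pods and confirms each is Running. A minimal sketch of the equivalent check from a client machine (context name taken from this run):

    kubectl --context addons-186035 -n kube-system get pods
    # or just name and phase, for a quick scan
    kubectl --context addons-186035 -n kube-system get pods \
        -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'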
	I1028 17:10:24.404437   21482 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:10:24.404488   21482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:10:24.424164   21482 system_svc.go:56] duration metric: took 19.720749ms WaitForService to wait for kubelet
	I1028 17:10:24.424183   21482 kubeadm.go:582] duration metric: took 2m31.898978217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:10:24.424199   21482 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:10:24.427142   21482 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:10:24.427164   21482 node_conditions.go:123] node cpu capacity is 2
	I1028 17:10:24.427176   21482 node_conditions.go:105] duration metric: took 2.971407ms to run NodePressure ...
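The NodePressure step above reads the node's reported capacity (17734596Ki ephemeral storage, 2 CPUs) and its condition list. An illustrative way to view the same fields (sketch only; node name taken from this run):

    kubectl --context addons-186035 get node addons-186035 -o jsonpath='{.status.capacity}'
    kubectl --context addons-186035 describe node addons-186035 | grep -A8 'Conditions:'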
	I1028 17:10:24.427187   21482 start.go:241] waiting for startup goroutines ...
	I1028 17:10:24.625336   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:25.125716   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:25.626072   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:26.126525   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:26.625021   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:27.125392   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:27.626372   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:28.126438   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:28.626186   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:29.125441   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:29.626176   21482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 17:10:30.125507   21482 kapi.go:107] duration metric: took 2m26.503556416s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 17:10:30.127026   21482 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-186035 cluster.
	I1028 17:10:30.128208   21482 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 17:10:30.129290   21482 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1028 17:10:30.130406   21482 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1028 17:10:30.131442   21482 addons.go:510] duration metric: took 2m37.606202627s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner default-storageclass amd-gpu-device-plugin storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1028 17:10:30.131478   21482 start.go:246] waiting for cluster config update ...
	I1028 17:10:30.131496   21482 start.go:255] writing updated cluster config ...
	I1028 17:10:30.131714   21482 ssh_runner.go:195] Run: rm -f paused
	I1028 17:10:30.182008   21482 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:10:30.183504   21482 out.go:177] * Done! kubectl is now configured to use "addons-186035" cluster and "default" namespace by default
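The gcp-auth messages above say the credentials are mounted into every newly created pod unless the pod carries a label with the gcp-auth-skip-secret key. A small sketch of how one might poke at that behaviour, assuming only what those messages state (pod names, the busybox image, and the label value "true" are placeholders; the mount is presumably injected at pod-creation time by the addon):

    kubectl config current-context    # should print addons-186035
    # a freshly created pod should show the injected credential mount
    kubectl --context addons-186035 run probe --image=busybox --restart=Never -- sleep 300
    kubectl --context addons-186035 get pod probe -o jsonpath='{.spec.containers[0].volumeMounts}'
    # opting out, per the message above, by labelling the pod with the gcp-auth-skip-secret key
    kubectl --context addons-186035 run probe2 --image=busybox --restart=Never \
        --labels=gcp-auth-skip-secret=true -- sleep 300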
	
	
	==> CRI-O <==
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.606611007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135819606586750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02b60938-e68c-4801-9867-2ce122d0c64f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.607128816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b695a2e6-8129-49a9-8a5d-b6b8078ba756 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.607184197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b695a2e6-8129-49a9-8a5d-b6b8078ba756 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.607486816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0058fa3b8e4f62e784a42027f1b859c5ecb8b0bbb709769ef7167c47dfc6ed1,PodSandboxId:224dc2718dae7ce64226378a0d82107276939848b474efed0b4d537f111ed379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730135620589929344,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jklgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4b78547-1aed-4e78-9a66-db282c1161d5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9
b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c28978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd092012f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b695a2e6-8129-49a9-8a5d-b6b8078ba756 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.642452803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22f7b4d4-46e4-4178-9d53-6fa7805480b8 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.642522532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22f7b4d4-46e4-4178-9d53-6fa7805480b8 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.643948064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f9a441c-adc5-4220-bb95-b20ae67a509b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.645340504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135819645315973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f9a441c-adc5-4220-bb95-b20ae67a509b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.645928788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dfe972f-ca75-4b51-81ec-dfa5b5259ea3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.645980216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dfe972f-ca75-4b51-81ec-dfa5b5259ea3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.646262138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0058fa3b8e4f62e784a42027f1b859c5ecb8b0bbb709769ef7167c47dfc6ed1,PodSandboxId:224dc2718dae7ce64226378a0d82107276939848b474efed0b4d537f111ed379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730135620589929344,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jklgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4b78547-1aed-4e78-9a66-db282c1161d5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9
b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c28978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd092012f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dfe972f-ca75-4b51-81ec-dfa5b5259ea3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.685565208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ecc0f3c-b600-4c17-bd41-8de274205bb5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.685653387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ecc0f3c-b600-4c17-bd41-8de274205bb5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.686951439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7b07f8d-fd95-4b97-b29b-b67989db3fa4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.688443511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135819688361478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7b07f8d-fd95-4b97-b29b-b67989db3fa4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.688999540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed0dcf50-d309-4486-88dd-2529fcca0377 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.689065528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed0dcf50-d309-4486-88dd-2529fcca0377 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.689330024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0058fa3b8e4f62e784a42027f1b859c5ecb8b0bbb709769ef7167c47dfc6ed1,PodSandboxId:224dc2718dae7ce64226378a0d82107276939848b474efed0b4d537f111ed379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730135620589929344,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jklgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4b78547-1aed-4e78-9a66-db282c1161d5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9
b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c28978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd092012f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed0dcf50-d309-4486-88dd-2529fcca0377 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.721264129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b94ae0d2-f69f-4d15-a114-da8d952228b5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.721346797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b94ae0d2-f69f-4d15-a114-da8d952228b5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.724239701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2956e33c-f54c-4921-b0c1-51ba493b7574 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.725457535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135819725371233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2956e33c-f54c-4921-b0c1-51ba493b7574 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.726138854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d948e16-c252-48bb-98df-1b75d11c062a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.726195653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d948e16-c252-48bb-98df-1b75d11c062a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:16:59 addons-186035 crio[659]: time="2024-10-28 17:16:59.726632700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0058fa3b8e4f62e784a42027f1b859c5ecb8b0bbb709769ef7167c47dfc6ed1,PodSandboxId:224dc2718dae7ce64226378a0d82107276939848b474efed0b4d537f111ed379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730135620589929344,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jklgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4b78547-1aed-4e78-9a66-db282c1161d5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c42d8554f886293de05a8b064065f418d9ddc98cbd1d121a5c5a6d9203de1b9,PodSandboxId:33e247f619c96902ebaea5178c532c1db17138ed5d0d8180079e38adbfb0ffc6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730135477294989190,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b40f41cd-78f5-4945-99b4-5630913ebfca,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd763de10a43d6342eadc82bb0add35915e649b7f4a292291ad822030753935,PodSandboxId:5756284ab9ccf8f96e00fc291a7de8af6891588057ce092824fdc40e9c0b4d54,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730135439837993581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5783d2c6-cf3e-4775-9
b0d-19fc4b151df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc2426fe37597b894be63f252c9ecff848ebaa56758c7d182ed349b39cd9552,PodSandboxId:848fd1506d89771125a8903ce093963934859efd5ed29db197b2a1ed7d196ed6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730135324505904275,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-6vwqq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2a6e6b1d-eaec-41b1-96c8-a3b0444088ec,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0ea2ebc9079944fd97cd677d483a4f7095138f11fbab8f95fdfb8e137ea261,PodSandboxId:aa9c667b13f3ac524b15bc6cde0b56dc9d488f0bc329ea7101456d3c28978bc5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730135295598077430,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cmh8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e0752de-e01b-4c91-989c-235728654d63,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6,PodSandboxId:08f45bd79f0681e43b4ddd7382c2564206c06f09f3577cfb4d84f62be403fdb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730135279646952170,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b798cc-678e-4c24-9e8e-d8e87d5b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d,PodSandboxId:5c0ea02eda904a3a790489efd092012f72b836b65b2c0ca2e3a8d1f5743ff940,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730135275369065490,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-znpww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9f893c-87ee-4a07-8ca0-7fed06690855,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924,PodSandboxId:291c01c08a0dd126f414c897b5e629dd2783c7863dee6970aa2224b4d87c6f64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730135273069132677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhnsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82fd776-0217-40e3-a973-146eb6cb0c5a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7,PodSandboxId:6c46a6fcf568d381a50f15fe68013bb900c664532d4d0cbdc00a808f312b46d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730135262033074139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeea8adae7e0b13f1e3d0d54789b73b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951,PodSandboxId:e8ce4f650c58eb14e3514393a7862b87079146b639b3c6caa05bf3c12753a0c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730135262050079599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe38b5b2cd2452b7f288f83be5d7b45,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0,PodSandboxId:98efa1525a4595ad52afb95dfc548bd19f28001e19867123ab4d6097da21b828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730135262035044703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630c780f4d19010429d7a7882b939d32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231,PodSandboxId:e8b2750535d7b77ff4d4f8a8675ab98cbce6c9eaa44b46ff70ebbe14cc4999e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730135262022724120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-186035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd434f163e3440c6351a99185234d04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d948e16-c252-48bb-98df-1b75d11c062a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b0058fa3b8e4f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   224dc2718dae7       hello-world-app-55bf9c44b4-jklgj
	5c42d8554f886       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   33e247f619c96       nginx
	ebd763de10a43       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   5756284ab9ccf       busybox
	1bc2426fe3759       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   8 minutes ago       Running             metrics-server            0                   848fd1506d897       metrics-server-84c5f94fbc-6vwqq
	cc0ea2ebc9079       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                8 minutes ago       Running             amd-gpu-device-plugin     0                   aa9c667b13f3a       amd-gpu-device-plugin-cmh8f
	118031ba1a771       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        9 minutes ago       Running             storage-provisioner       0                   08f45bd79f068       storage-provisioner
	614f092a6a9e0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        9 minutes ago       Running             coredns                   0                   5c0ea02eda904       coredns-7c65d6cfc9-znpww
	2369bc3d165e3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        9 minutes ago       Running             kube-proxy                0                   291c01c08a0dd       kube-proxy-qhnsh
	d09c6cd8e8adc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        9 minutes ago       Running             kube-controller-manager   0                   e8ce4f650c58e       kube-controller-manager-addons-186035
	2b168fbe99e03       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        9 minutes ago       Running             kube-scheduler            0                   98efa1525a459       kube-scheduler-addons-186035
	c537af4c03503       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        9 minutes ago       Running             etcd                      0                   6c46a6fcf568d       etcd-addons-186035
	deca3062b168e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        9 minutes ago       Running             kube-apiserver            0                   e8b2750535d7b       kube-apiserver-addons-186035
	
	
	==> coredns [614f092a6a9e060d0e14d55d26bade78d862f3becddff1435ce4cba661ff9c5d] <==
	[INFO] 10.244.0.22:60717 - 4231 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000134196s
	[INFO] 10.244.0.22:60717 - 772 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060227s
	[INFO] 10.244.0.22:60405 - 6903 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071174s
	[INFO] 10.244.0.22:60717 - 23789 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000154572s
	[INFO] 10.244.0.22:60405 - 11068 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062271s
	[INFO] 10.244.0.22:60717 - 8796 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000166254s
	[INFO] 10.244.0.22:60405 - 35666 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089504s
	[INFO] 10.244.0.22:60405 - 37885 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061459s
	[INFO] 10.244.0.22:60405 - 19831 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076474s
	[INFO] 10.244.0.22:60405 - 31396 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052534s
	[INFO] 10.244.0.22:60405 - 15333 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000092992s
	[INFO] 10.244.0.22:39625 - 36057 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092971s
	[INFO] 10.244.0.22:39625 - 4474 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058617s
	[INFO] 10.244.0.22:39625 - 53862 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051274s
	[INFO] 10.244.0.22:39625 - 27904 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045023s
	[INFO] 10.244.0.22:39625 - 49704 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002941s
	[INFO] 10.244.0.22:39625 - 50699 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030656s
	[INFO] 10.244.0.22:39625 - 60189 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000031217s
	[INFO] 10.244.0.22:50488 - 42558 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061329s
	[INFO] 10.244.0.22:50488 - 43029 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065427s
	[INFO] 10.244.0.22:50488 - 24722 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006568s
	[INFO] 10.244.0.22:50488 - 44512 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005825s
	[INFO] 10.244.0.22:50488 - 47621 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041206s
	[INFO] 10.244.0.22:50488 - 64498 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043959s
	[INFO] 10.244.0.22:50488 - 51436 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038293s
	
	
	==> describe nodes <==
	Name:               addons-186035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-186035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=addons-186035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_07_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-186035
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-186035
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:16:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:13:54 +0000   Mon, 28 Oct 2024 17:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:13:54 +0000   Mon, 28 Oct 2024 17:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:13:54 +0000   Mon, 28 Oct 2024 17:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:13:54 +0000   Mon, 28 Oct 2024 17:07:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    addons-186035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 05b49f430d1b4db0b0d719b6f9779dde
	  System UUID:                05b49f43-0d1b-4db0-b0d7-19b6f9779dde
	  Boot ID:                    61e165df-592d-406c-abb1-782959670d56
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  default                     hello-world-app-55bf9c44b4-jklgj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 amd-gpu-device-plugin-cmh8f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 coredns-7c65d6cfc9-znpww                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m7s
	  kube-system                 etcd-addons-186035                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m14s
	  kube-system                 kube-apiserver-addons-186035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-controller-manager-addons-186035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-proxy-qhnsh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-addons-186035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 metrics-server-84c5f94fbc-6vwqq          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         9m2s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m12s  kubelet          Node addons-186035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s  kubelet          Node addons-186035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s  kubelet          Node addons-186035 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m11s  kubelet          Node addons-186035 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node addons-186035 event: Registered Node addons-186035 in Controller
	
	
	==> dmesg <==
	[  +5.664234] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.147381] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +4.853550] kauditd_printk_skb: 113 callbacks suppressed
	[Oct28 17:08] kauditd_printk_skb: 163 callbacks suppressed
	[  +8.487748] kauditd_printk_skb: 57 callbacks suppressed
	[ +32.742273] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 17:09] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.126833] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.434606] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.289413] kauditd_printk_skb: 28 callbacks suppressed
	[Oct28 17:10] kauditd_printk_skb: 3 callbacks suppressed
	[  +8.793506] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.205726] kauditd_printk_skb: 13 callbacks suppressed
	[ +16.063287] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 17:11] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.477576] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.010649] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.011520] kauditd_printk_skb: 45 callbacks suppressed
	[  +7.453289] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.958909] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.750746] kauditd_printk_skb: 38 callbacks suppressed
	[Oct28 17:12] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.922265] kauditd_printk_skb: 57 callbacks suppressed
	[Oct28 17:13] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.267487] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [c537af4c0350337a6fbb2d1cf4b879c76b843042443a8bebca84672733220ca7] <==
	{"level":"warn","ts":"2024-10-28T17:09:14.476672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.413917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T17:09:14.476765Z","caller":"traceutil/trace.go:171","msg":"trace[2121358349] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1027; }","duration":"354.514001ms","start":"2024-10-28T17:09:14.122238Z","end":"2024-10-28T17:09:14.476752Z","steps":["trace[2121358349] 'agreement among raft nodes before linearized reading'  (duration: 354.39216ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.476862Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.122209Z","time spent":"354.645474ms","remote":"127.0.0.1:58058","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":12,"response size":30,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	{"level":"warn","ts":"2024-10-28T17:09:14.477181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.315469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:09:14.477285Z","caller":"traceutil/trace.go:171","msg":"trace[721181895] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1027; }","duration":"375.42189ms","start":"2024-10-28T17:09:14.101855Z","end":"2024-10-28T17:09:14.477277Z","steps":["trace[721181895] 'agreement among raft nodes before linearized reading'  (duration: 375.243903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.477320Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.101826Z","time spent":"375.489256ms","remote":"127.0.0.1:58074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-28T17:09:14.478461Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.153479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:09:14.480491Z","caller":"traceutil/trace.go:171","msg":"trace[624477916] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1027; }","duration":"402.16463ms","start":"2024-10-28T17:09:14.078298Z","end":"2024-10-28T17:09:14.480462Z","steps":["trace[624477916] 'agreement among raft nodes before linearized reading'  (duration: 400.13886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.480767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.078256Z","time spent":"402.498013ms","remote":"127.0.0.1:58074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-28T17:09:14.480547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"401.877568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-6vwqq\" ","response":"range_response_count:1 size:4564"}
	{"level":"info","ts":"2024-10-28T17:09:14.481013Z","caller":"traceutil/trace.go:171","msg":"trace[588713358] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-6vwqq; range_end:; response_count:1; response_revision:1027; }","duration":"402.678565ms","start":"2024-10-28T17:09:14.078326Z","end":"2024-10-28T17:09:14.481005Z","steps":["trace[588713358] 'agreement among raft nodes before linearized reading'  (duration: 399.469595ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:09:14.481057Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:09:14.078311Z","time spent":"402.737496ms","remote":"127.0.0.1:58074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4587,"request content":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-6vwqq\" "}
	{"level":"info","ts":"2024-10-28T17:09:57.386676Z","caller":"traceutil/trace.go:171","msg":"trace[1893733462] transaction","detail":"{read_only:false; response_revision:1168; number_of_response:1; }","duration":"269.910014ms","start":"2024-10-28T17:09:57.116753Z","end":"2024-10-28T17:09:57.386663Z","steps":["trace[1893733462] 'process raft request'  (duration: 269.311991ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:10:18.289247Z","caller":"traceutil/trace.go:171","msg":"trace[1543686112] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1239; }","duration":"252.746722ms","start":"2024-10-28T17:10:18.036485Z","end":"2024-10-28T17:10:18.289232Z","steps":["trace[1543686112] 'read index received'  (duration: 252.628617ms)","trace[1543686112] 'applied index is now lower than readState.Index'  (duration: 117.708µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T17:10:18.289515Z","caller":"traceutil/trace.go:171","msg":"trace[1386694662] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"287.374118ms","start":"2024-10-28T17:10:18.002126Z","end":"2024-10-28T17:10:18.289500Z","steps":["trace[1386694662] 'process raft request'  (duration: 287.024229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:10:18.289594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.759328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:10:18.290266Z","caller":"traceutil/trace.go:171","msg":"trace[1924234898] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"191.484592ms","start":"2024-10-28T17:10:18.098770Z","end":"2024-10-28T17:10:18.290255Z","steps":["trace[1924234898] 'agreement among raft nodes before linearized reading'  (duration: 190.740891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:10:18.289652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.166755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:10:18.290366Z","caller":"traceutil/trace.go:171","msg":"trace[224914482] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"253.872974ms","start":"2024-10-28T17:10:18.036481Z","end":"2024-10-28T17:10:18.290354Z","steps":["trace[224914482] 'agreement among raft nodes before linearized reading'  (duration: 253.15913ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:11:02.896945Z","caller":"traceutil/trace.go:171","msg":"trace[1650506679] transaction","detail":"{read_only:false; response_revision:1392; number_of_response:1; }","duration":"363.693976ms","start":"2024-10-28T17:11:02.533222Z","end":"2024-10-28T17:11:02.896916Z","steps":["trace[1650506679] 'process raft request'  (duration: 363.360466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T17:11:02.898061Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T17:11:02.533209Z","time spent":"364.037705ms","remote":"127.0.0.1:32806","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1380 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-28T17:12:13.379135Z","caller":"traceutil/trace.go:171","msg":"trace[641105911] linearizableReadLoop","detail":"{readStateIndex:1946; appliedIndex:1945; }","duration":"206.174523ms","start":"2024-10-28T17:12:13.172941Z","end":"2024-10-28T17:12:13.379115Z","steps":["trace[641105911] 'read index received'  (duration: 206.032011ms)","trace[641105911] 'applied index is now lower than readState.Index'  (duration: 142.122µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T17:12:13.379287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.31552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-resizer-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T17:12:13.379309Z","caller":"traceutil/trace.go:171","msg":"trace[1805250495] range","detail":"{range_begin:/registry/roles/kube-system/external-resizer-cfg; range_end:; response_count:0; response_revision:1868; }","duration":"206.389516ms","start":"2024-10-28T17:12:13.172914Z","end":"2024-10-28T17:12:13.379304Z","steps":["trace[1805250495] 'agreement among raft nodes before linearized reading'  (duration: 206.272199ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T17:12:13.379693Z","caller":"traceutil/trace.go:171","msg":"trace[63220696] transaction","detail":"{read_only:false; response_revision:1868; number_of_response:1; }","duration":"281.43557ms","start":"2024-10-28T17:12:13.098221Z","end":"2024-10-28T17:12:13.379657Z","steps":["trace[63220696] 'process raft request'  (duration: 280.79272ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:17:00 up 9 min,  0 users,  load average: 0.16, 0.48, 0.39
	Linux addons-186035 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [deca3062b168ea02fa6b7acb8e85f16ec61f8229b6f5ba424611bede74dc0231] <==
	E1028 17:09:48.532473       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.210.69:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.210.69:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.210.69:443: connect: connection refused" logger="UnhandledError"
	I1028 17:09:48.640184       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 17:10:46.563644       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8443->192.168.39.1:57084: use of closed network connection
	E1028 17:10:46.747002       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8443->192.168.39.1:57098: use of closed network connection
	I1028 17:10:55.925121       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.194.185"}
	I1028 17:11:07.346841       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 17:11:08.375464       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 17:11:13.018175       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 17:11:13.196569       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.35.159"}
	I1028 17:11:49.231716       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1028 17:11:51.635296       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1028 17:12:08.764581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.764693       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.781206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.781267       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.814601       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.814658       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.920200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.920763       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 17:12:08.927363       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 17:12:08.927464       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 17:12:09.927812       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 17:12:09.927876       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 17:12:09.941482       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 17:13:36.771002       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.115.220"}
	
	
	==> kube-controller-manager [d09c6cd8e8adcd2b1f4ab2cc13ff42dad56018474d9c1edd65fa55816e678951] <==
	E1028 17:14:39.388108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:14:46.359196       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:14:46.359251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:14:59.196432       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:14:59.196512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:17.541023       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:17.541099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:23.977827       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:23.977952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:26.087497       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:26.087606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:15:55.518668       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:15:55.518858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:03.496641       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:03.496768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:05.928707       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:05.928762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:19.812315       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:19.812361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:29.666890       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:29.667009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:36.370532       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:36.370682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 17:16:49.628080       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 17:16:49.628143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [2369bc3d165e336a3599abb7daa6c3164ef44fae6ad3f160879e62c681908924] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 17:07:53.707184       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 17:07:53.721588       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.15"]
	E1028 17:07:53.721667       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:07:53.796656       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 17:07:53.796707       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 17:07:53.796742       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:07:53.801187       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:07:53.801543       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:07:53.801569       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:07:53.802664       1 config.go:199] "Starting service config controller"
	I1028 17:07:53.802681       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:07:53.802712       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:07:53.802716       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:07:53.808441       1 config.go:328] "Starting node config controller"
	I1028 17:07:53.808455       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:07:53.902800       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:07:53.902869       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:07:53.908652       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b168fbe99e03940a10406b90ef6a0cb11a9e8f60e310c7fd6a81e9dfbff70d0] <==
	W1028 17:07:44.369744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:44.369773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:44.369816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:07:44.369844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:44.369952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 17:07:44.370028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.183567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.183685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.247520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:07:45.247608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.342907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 17:07:45.342985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.351805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:07:45.352649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.353589       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 17:07:45.353647       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 17:07:45.357613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.357668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.461875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.461906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.495562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:07:45.495611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:07:45.523940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 17:07:45.524042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 17:07:48.461053       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 17:15:47 addons-186035 kubelet[1202]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 17:15:47 addons-186035 kubelet[1202]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 17:15:47 addons-186035 kubelet[1202]: E1028 17:15:47.261349    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135747260800593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:47 addons-186035 kubelet[1202]: E1028 17:15:47.261428    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135747260800593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:48 addons-186035 kubelet[1202]: I1028 17:15:48.009710    1202 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cmh8f" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 17:15:57 addons-186035 kubelet[1202]: E1028 17:15:57.268605    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135757268110518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:15:57 addons-186035 kubelet[1202]: E1028 17:15:57.268955    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135757268110518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:07 addons-186035 kubelet[1202]: E1028 17:16:07.272016    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135767271176975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:07 addons-186035 kubelet[1202]: E1028 17:16:07.272128    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135767271176975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:17 addons-186035 kubelet[1202]: E1028 17:16:17.274649    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135777274229401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:17 addons-186035 kubelet[1202]: E1028 17:16:17.275073    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135777274229401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:27 addons-186035 kubelet[1202]: E1028 17:16:27.277294    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135787276946764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:27 addons-186035 kubelet[1202]: E1028 17:16:27.277333    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135787276946764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:37 addons-186035 kubelet[1202]: E1028 17:16:37.285568    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135797284760024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:37 addons-186035 kubelet[1202]: E1028 17:16:37.285670    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135797284760024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:47 addons-186035 kubelet[1202]: E1028 17:16:47.025486    1202 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 17:16:47 addons-186035 kubelet[1202]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 17:16:47 addons-186035 kubelet[1202]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 17:16:47 addons-186035 kubelet[1202]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 17:16:47 addons-186035 kubelet[1202]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 17:16:47 addons-186035 kubelet[1202]: E1028 17:16:47.287875    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135807287575699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:47 addons-186035 kubelet[1202]: E1028 17:16:47.288076    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135807287575699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:54 addons-186035 kubelet[1202]: I1028 17:16:54.009650    1202 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 17:16:57 addons-186035 kubelet[1202]: E1028 17:16:57.290544    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135817290120130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:16:57 addons-186035 kubelet[1202]: E1028 17:16:57.290835    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730135817290120130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [118031ba1a771a1f1c39bff1674b6685649f77caa5beea18ef663703f51473d6] <==
	I1028 17:08:00.253709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 17:08:00.337586       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 17:08:00.337658       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 17:08:00.398162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 17:08:00.398893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d2ab1a8-d417-4ce4-b56c-459b458982ae", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-186035_7e4b825e-773b-4140-bb78-7cb2a9a6ef9e became leader
	I1028 17:08:00.398933       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-186035_7e4b825e-773b-4140-bb78-7cb2a9a6ef9e!
	I1028 17:08:00.811127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-186035_7e4b825e-773b-4140-bb78-7cb2a9a6ef9e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-186035 -n addons-186035
helpers_test.go:261: (dbg) Run:  kubectl --context addons-186035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (366.30s)
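The kube-apiserver log above shows the aggregated v1beta1.metrics.k8s.io API repeatedly failing with a connection-refused error, which matches the metrics-server checks timing out. As a minimal sketch (not part of the test harness), the same group/version can be probed with client-go's discovery client; reading the kubeconfig path from the KUBECONFIG environment variable is an assumption for illustration only:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG points at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// When the aggregated metrics API is unhealthy (as in the apiserver log
	// above), this call returns an error instead of a resource list.
	rl, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		fmt.Println("metrics API not available:", err)
		return
	}
	for _, r := range rl.APIResources {
		fmt.Println("available:", r.Name)
	}
}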

                                                
                                    
TestAddons/StoppedEnableDisable (154.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-186035
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-186035: exit status 82 (2m0.436443005s)

                                                
                                                
-- stdout --
	* Stopping node "addons-186035"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-186035" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-186035
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-186035: exit status 11 (21.576151964s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-186035" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-186035
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-186035: exit status 11 (6.144308375s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-186035" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-186035
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-186035: exit status 11 (6.14335566s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-186035" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.30s)
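The stop failure above (exit status 82, GUEST_STOP_TIMEOUT after roughly two minutes) and the per-second "Waiting for machine to stop N/120" loop in the node-stop log further down follow the same shape: issue a stop request, then poll the driver state once per second until a deadline. A minimal Go sketch of that pattern follows; the vmState type and the stop/getState callbacks are hypothetical stand-ins for the driver calls seen in the log, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a hypothetical stand-in for the driver's reported machine state.
type vmState string

const (
	stateRunning vmState = "Running"
	stateStopped vmState = "Stopped"
)

// stopWithDeadline issues a stop request, then polls the VM state once per
// second for at most maxAttempts polls -- the same 0/120 ... 119/120 pattern
// visible in the node-stop log below. If the machine is still running when the
// attempts are exhausted, it returns the error that surfaces in this report.
func stopWithDeadline(stop func() error, getState func() (vmState, error), maxAttempts int) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		st, err := getState()
		if err != nil {
			return err
		}
		if st == stateStopped {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Toy usage: a VM whose state never leaves "Running", reproducing the
	// timeout condition; attempts shortened from 120 to 3 for brevity.
	err := stopWithDeadline(
		func() error { return nil },
		func() (vmState, error) { return stateRunning, nil },
		3,
	)
	fmt.Println(err)
}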

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 node stop m02 -v=7 --alsologtostderr
E1028 17:30:00.335686   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:30:33.436036   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:31:22.257681   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-381619 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.460805623s)

                                                
                                                
-- stdout --
	* Stopping node "ha-381619-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:29:24.648050   36112 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:29:24.648200   36112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:29:24.648209   36112 out.go:358] Setting ErrFile to fd 2...
	I1028 17:29:24.648214   36112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:29:24.648372   36112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:29:24.648629   36112 mustload.go:65] Loading cluster: ha-381619
	I1028 17:29:24.648989   36112 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:29:24.649003   36112 stop.go:39] StopHost: ha-381619-m02
	I1028 17:29:24.649337   36112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:29:24.649382   36112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:29:24.664665   36112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41933
	I1028 17:29:24.665137   36112 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:29:24.665695   36112 main.go:141] libmachine: Using API Version  1
	I1028 17:29:24.665721   36112 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:29:24.666075   36112 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:29:24.668122   36112 out.go:177] * Stopping node "ha-381619-m02"  ...
	I1028 17:29:24.669361   36112 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 17:29:24.669385   36112 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:29:24.669577   36112 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 17:29:24.669599   36112 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:29:24.672060   36112 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:29:24.672431   36112 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:29:24.672460   36112 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:29:24.672589   36112 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:29:24.672715   36112 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:29:24.672844   36112 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:29:24.672947   36112 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:29:24.760574   36112 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 17:29:24.813523   36112 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 17:29:24.870625   36112 main.go:141] libmachine: Stopping "ha-381619-m02"...
	I1028 17:29:24.870662   36112 main.go:141] libmachine: (ha-381619-m02) Calling .GetState
	I1028 17:29:24.871981   36112 main.go:141] libmachine: (ha-381619-m02) Calling .Stop
	I1028 17:29:24.875357   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 0/120
	I1028 17:29:25.877506   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 1/120
	I1028 17:29:26.879343   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 2/120
	I1028 17:29:27.880394   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 3/120
	I1028 17:29:28.881722   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 4/120
	I1028 17:29:29.883578   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 5/120
	I1028 17:29:30.884854   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 6/120
	I1028 17:29:31.886984   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 7/120
	I1028 17:29:32.888150   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 8/120
	I1028 17:29:33.889926   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 9/120
	I1028 17:29:34.891262   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 10/120
	I1028 17:29:35.892580   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 11/120
	I1028 17:29:36.893761   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 12/120
	I1028 17:29:37.894994   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 13/120
	I1028 17:29:38.896238   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 14/120
	I1028 17:29:39.898134   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 15/120
	I1028 17:29:40.899650   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 16/120
	I1028 17:29:41.900902   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 17/120
	I1028 17:29:42.902781   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 18/120
	I1028 17:29:43.904134   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 19/120
	I1028 17:29:44.906024   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 20/120
	I1028 17:29:45.907166   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 21/120
	I1028 17:29:46.908439   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 22/120
	I1028 17:29:47.909591   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 23/120
	I1028 17:29:48.911073   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 24/120
	I1028 17:29:49.912408   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 25/120
	I1028 17:29:50.913525   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 26/120
	I1028 17:29:51.914887   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 27/120
	I1028 17:29:52.916135   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 28/120
	I1028 17:29:53.917402   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 29/120
	I1028 17:29:54.918832   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 30/120
	I1028 17:29:55.920437   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 31/120
	I1028 17:29:56.921861   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 32/120
	I1028 17:29:57.923022   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 33/120
	I1028 17:29:58.924393   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 34/120
	I1028 17:29:59.926074   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 35/120
	I1028 17:30:00.927443   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 36/120
	I1028 17:30:01.928821   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 37/120
	I1028 17:30:02.930122   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 38/120
	I1028 17:30:03.931538   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 39/120
	I1028 17:30:04.933564   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 40/120
	I1028 17:30:05.934942   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 41/120
	I1028 17:30:06.936204   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 42/120
	I1028 17:30:07.937511   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 43/120
	I1028 17:30:08.938873   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 44/120
	I1028 17:30:09.940523   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 45/120
	I1028 17:30:10.941860   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 46/120
	I1028 17:30:11.944197   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 47/120
	I1028 17:30:12.945650   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 48/120
	I1028 17:30:13.947692   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 49/120
	I1028 17:30:14.949575   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 50/120
	I1028 17:30:15.951734   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 51/120
	I1028 17:30:16.953102   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 52/120
	I1028 17:30:17.954934   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 53/120
	I1028 17:30:18.957055   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 54/120
	I1028 17:30:19.958433   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 55/120
	I1028 17:30:20.959863   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 56/120
	I1028 17:30:21.961098   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 57/120
	I1028 17:30:22.962934   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 58/120
	I1028 17:30:23.964137   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 59/120
	I1028 17:30:24.966128   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 60/120
	I1028 17:30:25.967329   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 61/120
	I1028 17:30:26.968521   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 62/120
	I1028 17:30:27.970743   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 63/120
	I1028 17:30:28.971932   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 64/120
	I1028 17:30:29.973736   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 65/120
	I1028 17:30:30.975017   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 66/120
	I1028 17:30:31.976947   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 67/120
	I1028 17:30:32.978213   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 68/120
	I1028 17:30:33.979906   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 69/120
	I1028 17:30:34.981330   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 70/120
	I1028 17:30:35.982540   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 71/120
	I1028 17:30:36.983911   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 72/120
	I1028 17:30:37.985168   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 73/120
	I1028 17:30:38.986776   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 74/120
	I1028 17:30:39.988592   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 75/120
	I1028 17:30:40.990006   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 76/120
	I1028 17:30:41.991926   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 77/120
	I1028 17:30:42.993318   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 78/120
	I1028 17:30:43.994569   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 79/120
	I1028 17:30:44.996773   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 80/120
	I1028 17:30:45.998870   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 81/120
	I1028 17:30:47.000087   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 82/120
	I1028 17:30:48.002153   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 83/120
	I1028 17:30:49.003575   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 84/120
	I1028 17:30:50.005413   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 85/120
	I1028 17:30:51.006982   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 86/120
	I1028 17:30:52.009253   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 87/120
	I1028 17:30:53.011698   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 88/120
	I1028 17:30:54.013038   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 89/120
	I1028 17:30:55.014544   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 90/120
	I1028 17:30:56.015987   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 91/120
	I1028 17:30:57.017359   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 92/120
	I1028 17:30:58.018888   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 93/120
	I1028 17:30:59.020554   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 94/120
	I1028 17:31:00.021824   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 95/120
	I1028 17:31:01.023220   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 96/120
	I1028 17:31:02.024582   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 97/120
	I1028 17:31:03.025686   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 98/120
	I1028 17:31:04.027134   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 99/120
	I1028 17:31:05.028436   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 100/120
	I1028 17:31:06.029620   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 101/120
	I1028 17:31:07.030880   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 102/120
	I1028 17:31:08.033077   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 103/120
	I1028 17:31:09.034944   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 104/120
	I1028 17:31:10.036697   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 105/120
	I1028 17:31:11.038332   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 106/120
	I1028 17:31:12.039718   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 107/120
	I1028 17:31:13.041060   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 108/120
	I1028 17:31:14.042306   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 109/120
	I1028 17:31:15.044321   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 110/120
	I1028 17:31:16.045493   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 111/120
	I1028 17:31:17.046811   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 112/120
	I1028 17:31:18.048176   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 113/120
	I1028 17:31:19.049802   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 114/120
	I1028 17:31:20.051853   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 115/120
	I1028 17:31:21.053804   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 116/120
	I1028 17:31:22.055159   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 117/120
	I1028 17:31:23.056500   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 118/120
	I1028 17:31:24.059040   36112 main.go:141] libmachine: (ha-381619-m02) Waiting for machine to stop 119/120
	I1028 17:31:25.060110   36112 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 17:31:25.060247   36112 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-381619 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr: (18.716331119s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-381619 -n ha-381619
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 logs -n 25: (1.335181638s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m03_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m04 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp testdata/cp-test.txt                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m04_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03:/home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m03 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-381619 node stop m02 -v=7                                                    | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:24:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:24:32.704402   32020 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:24:32.704551   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704563   32020 out.go:358] Setting ErrFile to fd 2...
	I1028 17:24:32.704569   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704718   32020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:24:32.705246   32020 out.go:352] Setting JSON to false
	I1028 17:24:32.706049   32020 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4016,"bootTime":1730132257,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:24:32.706140   32020 start.go:139] virtualization: kvm guest
	I1028 17:24:32.708076   32020 out.go:177] * [ha-381619] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:24:32.709709   32020 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:24:32.709708   32020 notify.go:220] Checking for updates...
	I1028 17:24:32.711979   32020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:24:32.713179   32020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:24:32.714308   32020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.715427   32020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:24:32.716562   32020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:24:32.717898   32020 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:24:32.750233   32020 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 17:24:32.751376   32020 start.go:297] selected driver: kvm2
	I1028 17:24:32.751386   32020 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:24:32.751396   32020 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:24:32.752108   32020 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.752174   32020 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:24:32.765779   32020 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:24:32.765818   32020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:24:32.766066   32020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:24:32.766095   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:24:32.766149   32020 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 17:24:32.766159   32020 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 17:24:32.766215   32020 start.go:340] cluster config:
	{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:24:32.766343   32020 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.768753   32020 out.go:177] * Starting "ha-381619" primary control-plane node in "ha-381619" cluster
	I1028 17:24:32.769947   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:32.769974   32020 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:24:32.769982   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:24:32.770049   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:24:32.770062   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:24:32.770342   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:32.770362   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json: {Name:mkd5c3a5f97562236390379745e09449a8badb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:24:32.770497   32020 start.go:360] acquireMachinesLock for ha-381619: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:24:32.770539   32020 start.go:364] duration metric: took 26.277µs to acquireMachinesLock for "ha-381619"
	I1028 17:24:32.770561   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:24:32.770606   32020 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 17:24:32.772872   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:24:32.772986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:32.773028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:32.786246   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I1028 17:24:32.786651   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:32.787204   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:24:32.787223   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:32.787585   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:32.787761   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:32.787890   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:32.788041   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:24:32.788072   32020 client.go:168] LocalClient.Create starting
	I1028 17:24:32.788105   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:24:32.788134   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788152   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788202   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:24:32.788220   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788232   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788246   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:24:32.788258   32020 main.go:141] libmachine: (ha-381619) Calling .PreCreateCheck
	I1028 17:24:32.788587   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:32.789017   32020 main.go:141] libmachine: Creating machine...
	I1028 17:24:32.789034   32020 main.go:141] libmachine: (ha-381619) Calling .Create
	I1028 17:24:32.789161   32020 main.go:141] libmachine: (ha-381619) Creating KVM machine...
	I1028 17:24:32.790254   32020 main.go:141] libmachine: (ha-381619) DBG | found existing default KVM network
	I1028 17:24:32.790889   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.790760   32043 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1028 17:24:32.790924   32020 main.go:141] libmachine: (ha-381619) DBG | created network xml: 
	I1028 17:24:32.790942   32020 main.go:141] libmachine: (ha-381619) DBG | <network>
	I1028 17:24:32.790953   32020 main.go:141] libmachine: (ha-381619) DBG |   <name>mk-ha-381619</name>
	I1028 17:24:32.790960   32020 main.go:141] libmachine: (ha-381619) DBG |   <dns enable='no'/>
	I1028 17:24:32.790971   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.790981   32020 main.go:141] libmachine: (ha-381619) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 17:24:32.791022   32020 main.go:141] libmachine: (ha-381619) DBG |     <dhcp>
	I1028 17:24:32.791042   32020 main.go:141] libmachine: (ha-381619) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 17:24:32.791053   32020 main.go:141] libmachine: (ha-381619) DBG |     </dhcp>
	I1028 17:24:32.791062   32020 main.go:141] libmachine: (ha-381619) DBG |   </ip>
	I1028 17:24:32.791070   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.791079   32020 main.go:141] libmachine: (ha-381619) DBG | </network>
	I1028 17:24:32.791092   32020 main.go:141] libmachine: (ha-381619) DBG | 
	I1028 17:24:32.795776   32020 main.go:141] libmachine: (ha-381619) DBG | trying to create private KVM network mk-ha-381619 192.168.39.0/24...
	I1028 17:24:32.856590   32020 main.go:141] libmachine: (ha-381619) DBG | private KVM network mk-ha-381619 192.168.39.0/24 created
	I1028 17:24:32.856623   32020 main.go:141] libmachine: (ha-381619) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:32.856641   32020 main.go:141] libmachine: (ha-381619) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:24:32.856686   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.856608   32043 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.856733   32020 main.go:141] libmachine: (ha-381619) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:24:33.109141   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.109021   32043 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa...
	I1028 17:24:33.382423   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382288   32043 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk...
	I1028 17:24:33.382457   32020 main.go:141] libmachine: (ha-381619) DBG | Writing magic tar header
	I1028 17:24:33.382473   32020 main.go:141] libmachine: (ha-381619) DBG | Writing SSH key tar header
	I1028 17:24:33.382487   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382434   32043 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:33.382577   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 (perms=drwx------)
	I1028 17:24:33.382600   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:24:33.382611   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619
	I1028 17:24:33.382624   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:24:33.382636   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:33.382651   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:24:33.382662   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:24:33.382673   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:24:33.382683   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:24:33.382696   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:24:33.382710   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:24:33.382720   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:24:33.382733   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:33.382743   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home
	I1028 17:24:33.382755   32020 main.go:141] libmachine: (ha-381619) DBG | Skipping /home - not owner
	I1028 17:24:33.383729   32020 main.go:141] libmachine: (ha-381619) define libvirt domain using xml: 
	I1028 17:24:33.383753   32020 main.go:141] libmachine: (ha-381619) <domain type='kvm'>
	I1028 17:24:33.383763   32020 main.go:141] libmachine: (ha-381619)   <name>ha-381619</name>
	I1028 17:24:33.383771   32020 main.go:141] libmachine: (ha-381619)   <memory unit='MiB'>2200</memory>
	I1028 17:24:33.383782   32020 main.go:141] libmachine: (ha-381619)   <vcpu>2</vcpu>
	I1028 17:24:33.383791   32020 main.go:141] libmachine: (ha-381619)   <features>
	I1028 17:24:33.383800   32020 main.go:141] libmachine: (ha-381619)     <acpi/>
	I1028 17:24:33.383823   32020 main.go:141] libmachine: (ha-381619)     <apic/>
	I1028 17:24:33.383834   32020 main.go:141] libmachine: (ha-381619)     <pae/>
	I1028 17:24:33.383847   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.383857   32020 main.go:141] libmachine: (ha-381619)   </features>
	I1028 17:24:33.383868   32020 main.go:141] libmachine: (ha-381619)   <cpu mode='host-passthrough'>
	I1028 17:24:33.383876   32020 main.go:141] libmachine: (ha-381619)   
	I1028 17:24:33.383886   32020 main.go:141] libmachine: (ha-381619)   </cpu>
	I1028 17:24:33.383894   32020 main.go:141] libmachine: (ha-381619)   <os>
	I1028 17:24:33.383901   32020 main.go:141] libmachine: (ha-381619)     <type>hvm</type>
	I1028 17:24:33.383912   32020 main.go:141] libmachine: (ha-381619)     <boot dev='cdrom'/>
	I1028 17:24:33.383921   32020 main.go:141] libmachine: (ha-381619)     <boot dev='hd'/>
	I1028 17:24:33.383934   32020 main.go:141] libmachine: (ha-381619)     <bootmenu enable='no'/>
	I1028 17:24:33.383944   32020 main.go:141] libmachine: (ha-381619)   </os>
	I1028 17:24:33.383952   32020 main.go:141] libmachine: (ha-381619)   <devices>
	I1028 17:24:33.383961   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='cdrom'>
	I1028 17:24:33.383974   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/boot2docker.iso'/>
	I1028 17:24:33.383984   32020 main.go:141] libmachine: (ha-381619)       <target dev='hdc' bus='scsi'/>
	I1028 17:24:33.383994   32020 main.go:141] libmachine: (ha-381619)       <readonly/>
	I1028 17:24:33.384049   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384071   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='disk'>
	I1028 17:24:33.384079   32020 main.go:141] libmachine: (ha-381619)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:24:33.384087   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk'/>
	I1028 17:24:33.384092   32020 main.go:141] libmachine: (ha-381619)       <target dev='hda' bus='virtio'/>
	I1028 17:24:33.384099   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384104   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384111   32020 main.go:141] libmachine: (ha-381619)       <source network='mk-ha-381619'/>
	I1028 17:24:33.384116   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384122   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384127   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384134   32020 main.go:141] libmachine: (ha-381619)       <source network='default'/>
	I1028 17:24:33.384140   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384146   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384151   32020 main.go:141] libmachine: (ha-381619)     <serial type='pty'>
	I1028 17:24:33.384157   32020 main.go:141] libmachine: (ha-381619)       <target port='0'/>
	I1028 17:24:33.384180   32020 main.go:141] libmachine: (ha-381619)     </serial>
	I1028 17:24:33.384203   32020 main.go:141] libmachine: (ha-381619)     <console type='pty'>
	I1028 17:24:33.384217   32020 main.go:141] libmachine: (ha-381619)       <target type='serial' port='0'/>
	I1028 17:24:33.384235   32020 main.go:141] libmachine: (ha-381619)     </console>
	I1028 17:24:33.384247   32020 main.go:141] libmachine: (ha-381619)     <rng model='virtio'>
	I1028 17:24:33.384258   32020 main.go:141] libmachine: (ha-381619)       <backend model='random'>/dev/random</backend>
	I1028 17:24:33.384267   32020 main.go:141] libmachine: (ha-381619)     </rng>
	I1028 17:24:33.384291   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384303   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384320   32020 main.go:141] libmachine: (ha-381619)   </devices>
	I1028 17:24:33.384331   32020 main.go:141] libmachine: (ha-381619) </domain>
	I1028 17:24:33.384339   32020 main.go:141] libmachine: (ha-381619) 
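The lines above dump the libvirt <domain> XML that docker-machine-driver-kvm2 defines for the node: 2 vCPUs, 2200 MiB of memory, the boot ISO attached as a SCSI cdrom, the raw disk as a virtio device, and two virtio NICs (the private mk-ha-381619 network plus the default network). The following is a hypothetical Go sketch of emitting a similarly shaped domain definition with encoding/xml; the struct and field names are illustrative, not the driver's actual types.

package main

import (
	"encoding/xml"
	"fmt"
)

// disk models one <disk> element with a file-backed source.
type disk struct {
	Type   string `xml:"type,attr"`
	Device string `xml:"device,attr"`
	Source struct {
		File string `xml:"file,attr"`
	} `xml:"source"`
}

// domain models a stripped-down libvirt <domain> definition.
type domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	} `xml:"memory"`
	VCPU  int    `xml:"vcpu"`
	Disks []disk `xml:"devices>disk"`
}

func main() {
	d := domain{Type: "kvm", Name: "ha-381619", VCPU: 2}
	d.Memory.Unit, d.Memory.Value = "MiB", "2200"
	var root disk
	root.Type, root.Device = "file", "disk"
	root.Source.File = "/var/lib/minikube/ha-381619.rawdisk" // placeholder path, not the store path above
	d.Disks = []disk{root}
	out, _ := xml.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}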
	I1028 17:24:33.388368   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:d7:31:89 in network default
	I1028 17:24:33.388983   32020 main.go:141] libmachine: (ha-381619) Ensuring networks are active...
	I1028 17:24:33.389001   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:33.389577   32020 main.go:141] libmachine: (ha-381619) Ensuring network default is active
	I1028 17:24:33.389893   32020 main.go:141] libmachine: (ha-381619) Ensuring network mk-ha-381619 is active
	I1028 17:24:33.390366   32020 main.go:141] libmachine: (ha-381619) Getting domain xml...
	I1028 17:24:33.390966   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:34.558865   32020 main.go:141] libmachine: (ha-381619) Waiting to get IP...
	I1028 17:24:34.559610   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.559962   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.559982   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.559945   32043 retry.go:31] will retry after 257.179075ms: waiting for machine to come up
	I1028 17:24:34.818320   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.818636   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.818664   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.818591   32043 retry.go:31] will retry after 336.999416ms: waiting for machine to come up
	I1028 17:24:35.156955   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.157385   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.157410   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.157352   32043 retry.go:31] will retry after 376.336351ms: waiting for machine to come up
	I1028 17:24:35.534739   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.535148   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.535176   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.535109   32043 retry.go:31] will retry after 414.103212ms: waiting for machine to come up
	I1028 17:24:35.950512   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.950871   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.950902   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.950833   32043 retry.go:31] will retry after 701.752446ms: waiting for machine to come up
	I1028 17:24:36.653573   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:36.653919   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:36.653945   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:36.653879   32043 retry.go:31] will retry after 793.432647ms: waiting for machine to come up
	I1028 17:24:37.448827   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:37.449212   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:37.449233   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:37.449175   32043 retry.go:31] will retry after 894.965011ms: waiting for machine to come up
	I1028 17:24:38.345655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:38.346083   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:38.346104   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:38.346040   32043 retry.go:31] will retry after 955.035568ms: waiting for machine to come up
	I1028 17:24:39.303112   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:39.303513   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:39.303566   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:39.303470   32043 retry.go:31] will retry after 1.649236041s: waiting for machine to come up
	I1028 17:24:40.955622   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:40.956156   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:40.956183   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:40.956118   32043 retry.go:31] will retry after 1.776451571s: waiting for machine to come up
	I1028 17:24:42.733883   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:42.734354   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:42.734378   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:42.734330   32043 retry.go:31] will retry after 2.290450392s: waiting for machine to come up
	I1028 17:24:45.027299   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:45.027697   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:45.027727   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:45.027647   32043 retry.go:31] will retry after 3.000171726s: waiting for machine to come up
	I1028 17:24:48.029293   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:48.029625   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:48.029642   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:48.029599   32043 retry.go:31] will retry after 3.464287385s: waiting for machine to come up
	I1028 17:24:51.498145   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:51.498494   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:51.498520   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:51.498450   32043 retry.go:31] will retry after 4.798676944s: waiting for machine to come up
	I1028 17:24:56.301062   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301461   32020 main.go:141] libmachine: (ha-381619) Found IP for machine: 192.168.39.230
	I1028 17:24:56.301476   32020 main.go:141] libmachine: (ha-381619) Reserving static IP address...
	I1028 17:24:56.301485   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has current primary IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301800   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find host DHCP lease matching {name: "ha-381619", mac: "52:54:00:bf:e3:f2", ip: "192.168.39.230"} in network mk-ha-381619
	I1028 17:24:56.367996   32020 main.go:141] libmachine: (ha-381619) Reserved static IP address: 192.168.39.230
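Between defining the domain and reserving 192.168.39.230, the driver polls for the machine's DHCP lease, sleeping for a growing, jittered interval after each miss (257ms, 336ms, and so on up to several seconds), as the retry.go lines above show. A minimal sketch of that backoff-poll pattern follows, under the assumption that lookupIP is a stand-in for the real lease lookup; it is not minikube's retry helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookupIP until it reports an address or maxWait
// elapses, doubling a jittered delay between attempts.
func waitForIP(lookupIP func() (string, bool), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, bool) {
		tries++
		return "192.168.39.230", tries > 3 // pretend the lease shows up on the 4th poll
	}, 2*time.Minute)
	fmt.Println(ip, err)
}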
	I1028 17:24:56.368025   32020 main.go:141] libmachine: (ha-381619) Waiting for SSH to be available...
	I1028 17:24:56.368033   32020 main.go:141] libmachine: (ha-381619) DBG | Getting to WaitForSSH function...
	I1028 17:24:56.370488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.370848   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.370872   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.371022   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH client type: external
	I1028 17:24:56.371056   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa (-rw-------)
	I1028 17:24:56.371091   32020 main.go:141] libmachine: (ha-381619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:24:56.371104   32020 main.go:141] libmachine: (ha-381619) DBG | About to run SSH command:
	I1028 17:24:56.371114   32020 main.go:141] libmachine: (ha-381619) DBG | exit 0
	I1028 17:24:56.492195   32020 main.go:141] libmachine: (ha-381619) DBG | SSH cmd err, output: <nil>: 
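The WaitForSSH step above shells out to the external ssh client with non-interactive options and runs "exit 0" until the guest's sshd answers. A hedged sketch of that probe using os/exec is shown below; the key path and address are placeholders and this is not the actual docker-machine implementation.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns true once "ssh ... exit 0" against the guest succeeds,
// using the same kind of non-interactive options printed in the log above.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// Placeholder key path; the log uses the profile's machines/<name>/id_rsa.
	fmt.Println(sshReady("192.168.39.230", "/tmp/id_rsa"))
}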
	I1028 17:24:56.492449   32020 main.go:141] libmachine: (ha-381619) KVM machine creation complete!
	I1028 17:24:56.492777   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:56.493326   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493514   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493649   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:24:56.493664   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:24:56.494850   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:24:56.494862   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:24:56.494867   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:24:56.494872   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.496787   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497152   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.497174   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497302   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.497464   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497595   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497725   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.497885   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.498064   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.498078   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:24:56.595488   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:24:56.595509   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:24:56.595519   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.597859   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598187   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.598209   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598403   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.598582   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598880   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.599036   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.599254   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.599265   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:24:56.696771   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:24:56.696858   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:24:56.696872   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:24:56.696881   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697109   32020 buildroot.go:166] provisioning hostname "ha-381619"
	I1028 17:24:56.697130   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697282   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.699770   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700115   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.700139   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700271   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.700441   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700571   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700701   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.700825   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.701013   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.701029   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619 && echo "ha-381619" | sudo tee /etc/hostname
	I1028 17:24:56.814628   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:24:56.814655   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.817104   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817470   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.817491   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817657   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.817827   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.817992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.818124   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.818278   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.818455   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.818475   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:24:56.926794   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:24:56.926821   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:24:56.926841   32020 buildroot.go:174] setting up certificates
	I1028 17:24:56.926853   32020 provision.go:84] configureAuth start
	I1028 17:24:56.926865   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.927086   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:56.929479   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929816   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.929835   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929984   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.931934   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932225   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.932249   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932384   32020 provision.go:143] copyHostCerts
	I1028 17:24:56.932411   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932452   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:24:56.932465   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932554   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:24:56.932658   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932682   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:24:56.932692   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932731   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:24:56.932840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932873   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:24:56.932883   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932921   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:24:56.933013   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619 san=[127.0.0.1 192.168.39.230 ha-381619 localhost minikube]
	I1028 17:24:57.000217   32020 provision.go:177] copyRemoteCerts
	I1028 17:24:57.000264   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:24:57.000288   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.002585   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.002859   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.002887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.003010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.003192   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.003327   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.003456   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.082327   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:24:57.082386   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:24:57.108992   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:24:57.109040   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 17:24:57.131168   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:24:57.131225   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:24:57.153241   32020 provision.go:87] duration metric: took 226.378501ms to configureAuth
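	(Aside: the copyHostCerts step above removes any stale ca.pem/cert.pem/key.pem in the minikube home and copies fresh ones from the certs directory before configureAuth finishes. A minimal, hypothetical Go sketch of that remove-then-copy pattern — the paths are placeholders, not minikube's actual helpers:)

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyCert removes any existing file at dst and copies src into its place,
// mirroring the "found ..., removing ... / cp: ..." sequence in the log.
func copyCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder directories; the real paths live under the minikube home.
	certsDir := "/tmp/minikube-home/certs"
	homeDir := "/tmp/minikube-home"
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		if err := copyCert(filepath.Join(certsDir, name), filepath.Join(homeDir, name)); err != nil {
			fmt.Fprintln(os.Stderr, "copy failed:", err)
		}
	}
}
```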
	I1028 17:24:57.153264   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:24:57.153419   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:24:57.153491   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.155887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156229   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.156255   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156416   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.156589   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156751   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156909   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.157032   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.157170   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.157183   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:24:57.371091   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:24:57.371116   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:24:57.371138   32020 main.go:141] libmachine: (ha-381619) Calling .GetURL
	I1028 17:24:57.372265   32020 main.go:141] libmachine: (ha-381619) DBG | Using libvirt version 6000000
	I1028 17:24:57.374388   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374694   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.374715   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374887   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:24:57.374900   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:24:57.374907   32020 client.go:171] duration metric: took 24.586826396s to LocalClient.Create
	I1028 17:24:57.374929   32020 start.go:167] duration metric: took 24.586887382s to libmachine.API.Create "ha-381619"
	I1028 17:24:57.374942   32020 start.go:293] postStartSetup for "ha-381619" (driver="kvm2")
	I1028 17:24:57.374957   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:24:57.374978   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.375196   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:24:57.375226   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.377231   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377544   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.377561   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377690   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.377841   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.378010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.378127   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.458768   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:24:57.463205   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:24:57.463222   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:24:57.463283   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:24:57.463370   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:24:57.463382   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:24:57.463492   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:24:57.473092   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:24:57.499838   32020 start.go:296] duration metric: took 124.881379ms for postStartSetup
	I1028 17:24:57.499880   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:57.500412   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.502520   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.502817   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.502846   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.503009   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:57.503210   32020 start.go:128] duration metric: took 24.732586487s to createHost
	I1028 17:24:57.503234   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.505276   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505578   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.505602   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505703   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.505855   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.505992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.506115   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.506245   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.506406   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.506418   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:24:57.608878   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136297.586420313
	
	I1028 17:24:57.608900   32020 fix.go:216] guest clock: 1730136297.586420313
	I1028 17:24:57.608919   32020 fix.go:229] Guest: 2024-10-28 17:24:57.586420313 +0000 UTC Remote: 2024-10-28 17:24:57.503223131 +0000 UTC m=+24.834191366 (delta=83.197182ms)
	I1028 17:24:57.608956   32020 fix.go:200] guest clock delta is within tolerance: 83.197182ms
	I1028 17:24:57.608963   32020 start.go:83] releasing machines lock for "ha-381619", held for 24.838412899s
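	(Aside: the fix.go lines above compare the guest clock, queried with `date +%s.%N` over SSH, against the host clock and accept the drift when it is inside a tolerance. A rough illustration of that comparison; the one-second tolerance is an assumption, not the value minikube actually uses:)

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is acceptable.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(83 * time.Millisecond) // a delta similar to the one logged above
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
```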
	I1028 17:24:57.608987   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.609175   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.611488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611798   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.611830   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611946   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612411   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612586   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612684   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:24:57.612719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.612770   32020 ssh_runner.go:195] Run: cat /version.json
	I1028 17:24:57.612787   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.615260   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615428   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615614   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615648   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615673   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615698   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615759   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615940   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615944   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616269   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616272   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.616376   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.711561   32020 ssh_runner.go:195] Run: systemctl --version
	I1028 17:24:57.717385   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:24:57.881204   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:24:57.887117   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:24:57.887178   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:24:57.902953   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:24:57.902971   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:24:57.903029   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:24:57.919599   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:24:57.932865   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:24:57.932911   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:24:57.945714   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:24:57.958712   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:24:58.074716   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:24:58.228971   32020 docker.go:233] disabling docker service ...
	I1028 17:24:58.229043   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:24:58.242560   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:24:58.255313   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:24:58.370441   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:24:58.483893   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:24:58.497247   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:24:58.514703   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:24:58.514757   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.524413   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:24:58.524490   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.534125   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.543414   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.553077   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:24:58.562606   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.572154   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.588419   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.597992   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:24:58.606565   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:24:58.606613   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:24:58.618268   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:24:58.627230   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:24:58.734287   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
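	(Aside: the `sed -i` runs above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the cgroupfs cgroup manager before systemd reloads and restarts crio. A sketch of the same two substitutions done in Go with regexp — the behaviour is inferred from the sed expressions in the log, not from minikube's source:)

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the two substitutions the log performs with sed:
// pin the pause image and switch the cgroup manager to cgroupfs.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Placeholder copy of the drop-in; the real file is /etc/crio/crio.conf.d/02-crio.conf.
	if err := rewriteCrioConf("./02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```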
	I1028 17:24:58.826354   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:24:58.826428   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:24:58.830997   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:24:58.831057   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:24:58.834579   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:24:58.876875   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:24:58.876953   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.903643   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.932572   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:24:58.933808   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:58.935970   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936230   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:58.936257   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936509   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:24:58.940296   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
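	(Aside: the grep/echo pipeline above rewrites /etc/hosts so there is exactly one `host.minikube.internal` entry; the same trick is repeated later for `control-plane.minikube.internal`. A hypothetical Go version of the drop-matching-line-then-append idea, writing to a scratch file rather than /etc/hosts:)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending with the alias and appends a fresh entry,
// mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$` pipeline in the log.
func upsertHost(contents, ip, alias string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+alias) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+alias)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	updated := upsertHost(hosts, "192.168.39.1", "host.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.new", []byte(updated), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(updated)
}
```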
	I1028 17:24:58.952574   32020 kubeadm.go:883] updating cluster {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:24:58.952676   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:58.952732   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:24:58.984654   32020 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 17:24:58.984732   32020 ssh_runner.go:195] Run: which lz4
	I1028 17:24:58.988394   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 17:24:58.988478   32020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 17:24:58.992506   32020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 17:24:58.992533   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 17:25:00.255551   32020 crio.go:462] duration metric: took 1.267100193s to copy over tarball
	I1028 17:25:00.255628   32020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 17:25:02.245448   32020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.989785325s)
	I1028 17:25:02.245479   32020 crio.go:469] duration metric: took 1.989902074s to extract the tarball
	I1028 17:25:02.245485   32020 ssh_runner.go:146] rm: /preloaded.tar.lz4
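	(Aside: the sequence above stats /preloaded.tar.lz4, copies the ~392 MB preload tarball over when it is missing, and unpacks it under /var with `tar ... -I lz4`. A simplified sketch of the check-then-extract half, shelling out to tar the same way; the paths are taken from the log and the sudo/lz4 requirements are assumptions about the guest:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4" // same target path as in the log
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball not present; it would be copied over first:", err)
		return
	}
	// Mirror the extraction command from the log; needs lz4 and root for /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}
```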
	I1028 17:25:02.282635   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:25:02.327962   32020 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:25:02.327983   32020 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:25:02.327990   32020 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.2 crio true true} ...
	I1028 17:25:02.328079   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:02.328139   32020 ssh_runner.go:195] Run: crio config
	I1028 17:25:02.370696   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:02.370725   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:02.370738   32020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:25:02.370766   32020 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-381619 NodeName:ha-381619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:25:02.370888   32020 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-381619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:25:02.370908   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:02.370947   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:02.386589   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:02.386701   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
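	(Aside: the kube-vip static Pod manifest above is generated from the cluster settings — VIP 192.168.39.254, interface eth0, port 8443. A toy text/template rendering of just the environment block, to show how those values slot in; the template text is illustrative, not minikube's actual kube-vip template:)

```go
package main

import (
	"os"
	"text/template"
)

// Only the fields that vary in the manifest shown in the log.
type vipConfig struct {
	Interface string
	Address   string
	Port      string
}

const envTmpl = `    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.Address}}
    - name: port
      value: "{{.Port}}"
`

func main() {
	cfg := vipConfig{Interface: "eth0", Address: "192.168.39.254", Port: "8443"}
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	// Render to stdout; minikube writes the full manifest to /etc/kubernetes/manifests.
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```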
	I1028 17:25:02.386768   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:02.396553   32020 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:25:02.396617   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 17:25:02.405738   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 17:25:02.421400   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:02.437117   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 17:25:02.452375   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 17:25:02.467922   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:02.471573   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:02.483093   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:02.609045   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:25:02.625565   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.230
	I1028 17:25:02.625588   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:02.625605   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.625774   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:02.625839   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:02.625856   32020 certs.go:256] generating profile certs ...
	I1028 17:25:02.625920   32020 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:02.625937   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt with IP's: []
	I1028 17:25:02.808278   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt ...
	I1028 17:25:02.808301   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt: {Name:mkc46e4b9b851301d42b46f45c8b044b11edfb36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808454   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key ...
	I1028 17:25:02.808464   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key: {Name:mkd681d3c01379608131f30441747317e91c7a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808570   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb
	I1028 17:25:02.808586   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.254]
	I1028 17:25:03.000249   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb ...
	I1028 17:25:03.000276   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb: {Name:mka7f7f8394389959cb184a46e51c1572954cddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000436   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb ...
	I1028 17:25:03.000449   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb: {Name:mk9ae1b9eef85a6c1bbc7739c982c84bfb111d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000555   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:03.000643   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:25:03.000695   32020 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:25:03.000710   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt with IP's: []
	I1028 17:25:03.126776   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt ...
	I1028 17:25:03.126802   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt: {Name:mk682452f5be7b32ad3e949275f7af954945db7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.126938   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key ...
	I1028 17:25:03.126948   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key: {Name:mk5feeb9713d67bfc630ef82b40280ce400bc4ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.127009   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:03.127027   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:03.127041   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:03.127053   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:03.127070   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:03.127083   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:03.127094   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:03.127106   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:03.127161   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:03.127194   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:03.127204   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:03.127228   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:03.127253   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:03.127274   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:03.127311   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:03.127335   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.127348   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.127360   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.127858   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:03.153264   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:03.175704   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:03.198131   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:03.220379   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 17:25:03.243352   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:25:03.265623   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:03.287951   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:03.312260   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:03.336494   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:03.363576   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:03.401524   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:25:03.430796   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:03.437428   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:03.448106   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452501   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452553   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.458194   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:03.468982   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:03.479358   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483520   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483564   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.488936   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:03.499033   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:03.509212   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513380   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513413   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.518680   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
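	(Aside: the openssl/ln pairs above compute each CA certificate's subject hash and create the `<hash>.0` symlink OpenSSL expects under /etc/ssl/certs. A small sketch that shells out to `openssl x509 -hash` the same way and only prints the symlink it would create — hypothetical usage, nothing is actually linked:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// The log creates this with: sudo ln -fs <cert> <link>
	fmt.Printf("would link %s -> %s\n", link, cert)
}
```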
	I1028 17:25:03.528774   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:03.532547   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:03.532597   32020 kubeadm.go:392] StartCluster: {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:03.532684   32020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:25:03.532747   32020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:25:03.571597   32020 cri.go:89] found id: ""
	I1028 17:25:03.571655   32020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:25:03.581447   32020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:25:03.590775   32020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:25:03.599971   32020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:25:03.599983   32020 kubeadm.go:157] found existing configuration files:
	
	I1028 17:25:03.600011   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:25:03.608531   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:25:03.608565   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:25:03.617452   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:25:03.626079   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:25:03.626124   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:25:03.635124   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.644097   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:25:03.644143   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.653605   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:25:03.662453   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:25:03.662497   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
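	(Aside: before kubeadm init runs, the log greps each existing kubeconfig under /etc/kubernetes for the control-plane endpoint and deletes the file when the endpoint is missing. A condensed Go sketch of that cleanup loop — endpoint and file list copied from the log, with the `sudo rm -f` replaced by a print so the sketch is safe to run:)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// The log runs `sudo rm -f <file>` here; just report it in this sketch.
			fmt.Printf("stale or missing, would remove: %s\n", f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}
```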
	I1028 17:25:03.671488   32020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 17:25:03.865602   32020 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 17:25:14.531712   32020 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:25:14.531787   32020 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:25:14.531884   32020 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:25:14.532023   32020 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:25:14.532157   32020 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:25:14.532250   32020 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:25:14.533662   32020 out.go:235]   - Generating certificates and keys ...
	I1028 17:25:14.533743   32020 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:25:14.533841   32020 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:25:14.533931   32020 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:25:14.534016   32020 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:25:14.534080   32020 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:25:14.534133   32020 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:25:14.534179   32020 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:25:14.534283   32020 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534363   32020 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:25:14.534530   32020 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534620   32020 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:25:14.534728   32020 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:25:14.534800   32020 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:25:14.534868   32020 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:25:14.534934   32020 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:25:14.535013   32020 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:25:14.535092   32020 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:25:14.535200   32020 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:25:14.535281   32020 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:25:14.535399   32020 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:25:14.535478   32020 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:25:14.537017   32020 out.go:235]   - Booting up control plane ...
	I1028 17:25:14.537115   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:25:14.537184   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:25:14.537257   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:25:14.537408   32020 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:25:14.537527   32020 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:25:14.537591   32020 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:25:14.537728   32020 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:25:14.537862   32020 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:25:14.537919   32020 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001240837s
	I1028 17:25:14.537979   32020 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:25:14.538029   32020 kubeadm.go:310] [api-check] The API server is healthy after 5.745465318s
	I1028 17:25:14.538126   32020 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:25:14.538233   32020 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:25:14.538314   32020 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:25:14.538487   32020 kubeadm.go:310] [mark-control-plane] Marking the node ha-381619 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:25:14.538537   32020 kubeadm.go:310] [bootstrap-token] Using token: z48g6f.v3e9buj5ot2drke2
	I1028 17:25:14.539818   32020 out.go:235]   - Configuring RBAC rules ...
	I1028 17:25:14.539934   32020 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:25:14.540010   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:25:14.540140   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:25:14.540310   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:25:14.540484   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:25:14.540575   32020 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:25:14.540725   32020 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:25:14.540796   32020 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:25:14.540853   32020 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:25:14.540862   32020 kubeadm.go:310] 
	I1028 17:25:14.540934   32020 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:25:14.540941   32020 kubeadm.go:310] 
	I1028 17:25:14.541053   32020 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:25:14.541063   32020 kubeadm.go:310] 
	I1028 17:25:14.541098   32020 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:25:14.541149   32020 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:25:14.541207   32020 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:25:14.541220   32020 kubeadm.go:310] 
	I1028 17:25:14.541267   32020 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:25:14.541273   32020 kubeadm.go:310] 
	I1028 17:25:14.541311   32020 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:25:14.541317   32020 kubeadm.go:310] 
	I1028 17:25:14.541391   32020 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:25:14.541462   32020 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:25:14.541520   32020 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:25:14.541526   32020 kubeadm.go:310] 
	I1028 17:25:14.541594   32020 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:25:14.541676   32020 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:25:14.541684   32020 kubeadm.go:310] 
	I1028 17:25:14.541772   32020 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.541903   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 17:25:14.541939   32020 kubeadm.go:310] 	--control-plane 
	I1028 17:25:14.541952   32020 kubeadm.go:310] 
	I1028 17:25:14.542037   32020 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:25:14.542044   32020 kubeadm.go:310] 
	I1028 17:25:14.542111   32020 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.542209   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 17:25:14.542219   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:14.542223   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:14.543763   32020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 17:25:14.544966   32020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 17:25:14.550724   32020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 17:25:14.550742   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 17:25:14.570257   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 17:25:14.924676   32020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:25:14.924729   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:14.924751   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619 minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=true
	I1028 17:25:14.954780   32020 ops.go:34] apiserver oom_adj: -16
	I1028 17:25:15.130305   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:15.631369   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.131137   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.631423   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.131390   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.226452   32020 kubeadm.go:1113] duration metric: took 2.301774809s to wait for elevateKubeSystemPrivileges
	I1028 17:25:17.226483   32020 kubeadm.go:394] duration metric: took 13.693888567s to StartCluster
	I1028 17:25:17.226504   32020 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.226586   32020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.227504   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.227753   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:25:17.227749   32020 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:17.227776   32020 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 17:25:17.227845   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:25:17.227858   32020 addons.go:69] Setting storage-provisioner=true in profile "ha-381619"
	I1028 17:25:17.227896   32020 addons.go:234] Setting addon storage-provisioner=true in "ha-381619"
	I1028 17:25:17.227912   32020 addons.go:69] Setting default-storageclass=true in profile "ha-381619"
	I1028 17:25:17.227947   32020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-381619"
	I1028 17:25:17.228016   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:17.227925   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.228398   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228444   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.228490   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228533   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.243165   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I1028 17:25:17.243382   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I1028 17:25:17.243612   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.243827   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.244081   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244106   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244338   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244363   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244419   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244705   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244874   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.244986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.245028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.246886   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.247245   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 17:25:17.248034   32020 addons.go:234] Setting addon default-storageclass=true in "ha-381619"
	I1028 17:25:17.248080   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.248440   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.248495   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.248686   32020 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 17:25:17.259449   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I1028 17:25:17.259906   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.260429   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.260457   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.260757   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.260953   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.262554   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.262967   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I1028 17:25:17.263363   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.263726   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.263747   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.264078   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.264715   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.264763   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.264944   32020 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:25:17.266586   32020 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.266605   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:25:17.266623   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.269507   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.269884   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.269905   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.270038   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.270201   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.270351   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.270481   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:17.279872   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I1028 17:25:17.280334   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.280920   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.280938   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.281336   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.281528   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.283217   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.283405   32020 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.283421   32020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:25:17.283436   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.285906   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286319   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.286352   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286428   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.286601   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.286754   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.286885   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:17.359502   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:25:17.440263   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.482707   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.757670   32020 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1028 17:25:17.987134   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987176   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987203   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987222   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987446   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987453   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987512   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987532   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987544   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987486   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987487   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987697   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987716   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987723   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987752   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987764   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987811   32020 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 17:25:17.987831   32020 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 17:25:17.987933   32020 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 17:25:17.987946   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:17.987957   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:17.987961   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:17.988187   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.988302   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.988326   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.005294   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:25:18.006136   32020 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 17:25:18.006153   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:18.006163   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:18.006169   32020 round_trippers.go:473]     Content-Type: application/json
	I1028 17:25:18.006173   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:18.009564   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:25:18.009782   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:18.009793   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:18.010026   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:18.010041   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.010063   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:18.011483   32020 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 17:25:18.012573   32020 addons.go:510] duration metric: took 784.803587ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 17:25:18.012609   32020 start.go:246] waiting for cluster config update ...
	I1028 17:25:18.012623   32020 start.go:255] writing updated cluster config ...
	I1028 17:25:18.013902   32020 out.go:201] 
	I1028 17:25:18.015058   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:18.015120   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.016447   32020 out.go:177] * Starting "ha-381619-m02" control-plane node in "ha-381619" cluster
	I1028 17:25:18.017519   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:25:18.017534   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:25:18.017609   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:25:18.017619   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:25:18.017672   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.017831   32020 start.go:360] acquireMachinesLock for ha-381619-m02: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:25:18.017871   32020 start.go:364] duration metric: took 23.784µs to acquireMachinesLock for "ha-381619-m02"
	I1028 17:25:18.017886   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:18.017946   32020 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 17:25:18.019437   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:25:18.019500   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:18.019529   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:18.033319   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I1028 17:25:18.033727   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:18.034182   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:18.034200   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:18.034550   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:18.034715   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:18.034872   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:18.035033   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:25:18.035060   32020 client.go:168] LocalClient.Create starting
	I1028 17:25:18.035096   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:25:18.035126   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035142   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035187   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:25:18.035204   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035216   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035230   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:25:18.035237   32020 main.go:141] libmachine: (ha-381619-m02) Calling .PreCreateCheck
	I1028 17:25:18.035397   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:18.035746   32020 main.go:141] libmachine: Creating machine...
	I1028 17:25:18.035760   32020 main.go:141] libmachine: (ha-381619-m02) Calling .Create
	I1028 17:25:18.035901   32020 main.go:141] libmachine: (ha-381619-m02) Creating KVM machine...
	I1028 17:25:18.037157   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing default KVM network
	I1028 17:25:18.037313   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing private KVM network mk-ha-381619
	I1028 17:25:18.037431   32020 main.go:141] libmachine: (ha-381619-m02) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.037482   32020 main.go:141] libmachine: (ha-381619-m02) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:25:18.037542   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.037441   32379 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.037604   32020 main.go:141] libmachine: (ha-381619-m02) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:25:18.305482   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.305364   32379 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa...
	I1028 17:25:18.398014   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.397913   32379 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk...
	I1028 17:25:18.398067   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing magic tar header
	I1028 17:25:18.398088   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing SSH key tar header
	I1028 17:25:18.398095   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.398018   32379 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.398114   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02
	I1028 17:25:18.398136   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:25:18.398156   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 (perms=drwx------)
	I1028 17:25:18.398166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.398180   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:25:18.398187   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:25:18.398194   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:25:18.398201   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home
	I1028 17:25:18.398207   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Skipping /home - not owner
	I1028 17:25:18.398217   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:25:18.398254   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:25:18.398277   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:25:18.398289   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:25:18.398304   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:25:18.398338   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:18.399119   32020 main.go:141] libmachine: (ha-381619-m02) define libvirt domain using xml: 
	I1028 17:25:18.399128   32020 main.go:141] libmachine: (ha-381619-m02) <domain type='kvm'>
	I1028 17:25:18.399133   32020 main.go:141] libmachine: (ha-381619-m02)   <name>ha-381619-m02</name>
	I1028 17:25:18.399138   32020 main.go:141] libmachine: (ha-381619-m02)   <memory unit='MiB'>2200</memory>
	I1028 17:25:18.399142   32020 main.go:141] libmachine: (ha-381619-m02)   <vcpu>2</vcpu>
	I1028 17:25:18.399146   32020 main.go:141] libmachine: (ha-381619-m02)   <features>
	I1028 17:25:18.399154   32020 main.go:141] libmachine: (ha-381619-m02)     <acpi/>
	I1028 17:25:18.399160   32020 main.go:141] libmachine: (ha-381619-m02)     <apic/>
	I1028 17:25:18.399167   32020 main.go:141] libmachine: (ha-381619-m02)     <pae/>
	I1028 17:25:18.399171   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399177   32020 main.go:141] libmachine: (ha-381619-m02)   </features>
	I1028 17:25:18.399183   32020 main.go:141] libmachine: (ha-381619-m02)   <cpu mode='host-passthrough'>
	I1028 17:25:18.399188   32020 main.go:141] libmachine: (ha-381619-m02)   
	I1028 17:25:18.399194   32020 main.go:141] libmachine: (ha-381619-m02)   </cpu>
	I1028 17:25:18.399199   32020 main.go:141] libmachine: (ha-381619-m02)   <os>
	I1028 17:25:18.399206   32020 main.go:141] libmachine: (ha-381619-m02)     <type>hvm</type>
	I1028 17:25:18.399211   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='cdrom'/>
	I1028 17:25:18.399223   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='hd'/>
	I1028 17:25:18.399234   32020 main.go:141] libmachine: (ha-381619-m02)     <bootmenu enable='no'/>
	I1028 17:25:18.399255   32020 main.go:141] libmachine: (ha-381619-m02)   </os>
	I1028 17:25:18.399268   32020 main.go:141] libmachine: (ha-381619-m02)   <devices>
	I1028 17:25:18.399274   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='cdrom'>
	I1028 17:25:18.399282   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/boot2docker.iso'/>
	I1028 17:25:18.399289   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hdc' bus='scsi'/>
	I1028 17:25:18.399293   32020 main.go:141] libmachine: (ha-381619-m02)       <readonly/>
	I1028 17:25:18.399299   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399305   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='disk'>
	I1028 17:25:18.399316   32020 main.go:141] libmachine: (ha-381619-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:25:18.399348   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk'/>
	I1028 17:25:18.399365   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hda' bus='virtio'/>
	I1028 17:25:18.399403   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399425   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399439   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='mk-ha-381619'/>
	I1028 17:25:18.399446   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399454   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399464   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399473   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='default'/>
	I1028 17:25:18.399483   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399491   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399505   32020 main.go:141] libmachine: (ha-381619-m02)     <serial type='pty'>
	I1028 17:25:18.399516   32020 main.go:141] libmachine: (ha-381619-m02)       <target port='0'/>
	I1028 17:25:18.399525   32020 main.go:141] libmachine: (ha-381619-m02)     </serial>
	I1028 17:25:18.399531   32020 main.go:141] libmachine: (ha-381619-m02)     <console type='pty'>
	I1028 17:25:18.399536   32020 main.go:141] libmachine: (ha-381619-m02)       <target type='serial' port='0'/>
	I1028 17:25:18.399544   32020 main.go:141] libmachine: (ha-381619-m02)     </console>
	I1028 17:25:18.399554   32020 main.go:141] libmachine: (ha-381619-m02)     <rng model='virtio'>
	I1028 17:25:18.399564   32020 main.go:141] libmachine: (ha-381619-m02)       <backend model='random'>/dev/random</backend>
	I1028 17:25:18.399578   32020 main.go:141] libmachine: (ha-381619-m02)     </rng>
	I1028 17:25:18.399588   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399596   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399604   32020 main.go:141] libmachine: (ha-381619-m02)   </devices>
	I1028 17:25:18.399613   32020 main.go:141] libmachine: (ha-381619-m02) </domain>
	I1028 17:25:18.399622   32020 main.go:141] libmachine: (ha-381619-m02) 
	I1028 17:25:18.405867   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:26:9b:68 in network default
	I1028 17:25:18.406379   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring networks are active...
	I1028 17:25:18.406395   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:18.407090   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network default is active
	I1028 17:25:18.407385   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network mk-ha-381619 is active
	I1028 17:25:18.407717   32020 main.go:141] libmachine: (ha-381619-m02) Getting domain xml...
	I1028 17:25:18.408378   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:19.597563   32020 main.go:141] libmachine: (ha-381619-m02) Waiting to get IP...
	I1028 17:25:19.598384   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.598740   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.598789   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.598740   32379 retry.go:31] will retry after 190.903064ms: waiting for machine to come up
	I1028 17:25:19.791078   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.791557   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.791589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.791498   32379 retry.go:31] will retry after 306.415198ms: waiting for machine to come up
	I1028 17:25:20.099990   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.100410   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.100438   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.100363   32379 retry.go:31] will retry after 461.052427ms: waiting for machine to come up
	I1028 17:25:20.562787   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.563226   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.563254   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.563181   32379 retry.go:31] will retry after 399.454176ms: waiting for machine to come up
	I1028 17:25:20.964734   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.965138   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.965168   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.965088   32379 retry.go:31] will retry after 468.537228ms: waiting for machine to come up
	I1028 17:25:21.435633   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:21.436036   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:21.436065   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:21.435978   32379 retry.go:31] will retry after 901.623232ms: waiting for machine to come up
	I1028 17:25:22.338882   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:22.339214   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:22.339251   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:22.339170   32379 retry.go:31] will retry after 1.174231376s: waiting for machine to come up
	I1028 17:25:23.514567   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:23.515122   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:23.515148   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:23.515075   32379 retry.go:31] will retry after 1.47285995s: waiting for machine to come up
	I1028 17:25:24.989376   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:24.989742   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:24.989772   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:24.989693   32379 retry.go:31] will retry after 1.395202662s: waiting for machine to come up
	I1028 17:25:26.387051   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:26.387470   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:26.387497   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:26.387419   32379 retry.go:31] will retry after 1.648219706s: waiting for machine to come up
	I1028 17:25:28.036842   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:28.037349   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:28.037375   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:28.037295   32379 retry.go:31] will retry after 2.189322328s: waiting for machine to come up
	I1028 17:25:30.229493   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:30.229820   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:30.229841   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:30.229780   32379 retry.go:31] will retry after 2.90274213s: waiting for machine to come up
	I1028 17:25:33.134730   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:33.135076   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:33.135092   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:33.135034   32379 retry.go:31] will retry after 4.079584337s: waiting for machine to come up
	I1028 17:25:37.219140   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:37.219485   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:37.219505   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:37.219442   32379 retry.go:31] will retry after 4.856708442s: waiting for machine to come up
	I1028 17:25:42.077346   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077745   32020 main.go:141] libmachine: (ha-381619-m02) Found IP for machine: 192.168.39.171
	I1028 17:25:42.077766   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has current primary IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077785   32020 main.go:141] libmachine: (ha-381619-m02) Reserving static IP address...
	I1028 17:25:42.078069   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "ha-381619-m02", mac: "52:54:00:ab:1d:c9", ip: "192.168.39.171"} in network mk-ha-381619
	I1028 17:25:42.145216   32020 main.go:141] libmachine: (ha-381619-m02) Reserved static IP address: 192.168.39.171
	I1028 17:25:42.145248   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:42.145256   32020 main.go:141] libmachine: (ha-381619-m02) Waiting for SSH to be available...
	I1028 17:25:42.147449   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.147844   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619
	I1028 17:25:42.147868   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:ab:1d:c9
	I1028 17:25:42.148011   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:42.148037   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:42.148079   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:42.148093   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:42.148106   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:42.151405   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:25:42.151422   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:25:42.151430   32020 main.go:141] libmachine: (ha-381619-m02) DBG | command : exit 0
	I1028 17:25:42.151434   32020 main.go:141] libmachine: (ha-381619-m02) DBG | err     : exit status 255
	I1028 17:25:42.151457   32020 main.go:141] libmachine: (ha-381619-m02) DBG | output  : 
	I1028 17:25:45.153548   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:45.155666   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156001   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.156026   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156153   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:45.156174   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:45.156209   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:45.156220   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:45.156228   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:45.284123   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 17:25:45.284412   32020 main.go:141] libmachine: (ha-381619-m02) KVM machine creation complete!
	I1028 17:25:45.284721   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:45.285293   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285476   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285636   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:25:45.285651   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetState
	I1028 17:25:45.286839   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:25:45.286853   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:25:45.286874   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:25:45.286883   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.289343   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289699   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.289732   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289877   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.290050   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290180   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290283   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.290450   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.290659   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.290673   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:25:45.403429   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:25:45.403453   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:25:45.403460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.406169   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406520   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.406547   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406664   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.406833   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.406968   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.407121   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.407274   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.407471   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.407486   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:25:45.516915   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:25:45.516972   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:25:45.516982   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:25:45.516996   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517247   32020 buildroot.go:166] provisioning hostname "ha-381619-m02"
	I1028 17:25:45.517269   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.520442   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.520895   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.520951   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.521136   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.521306   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521441   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521550   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.521679   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.521869   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.521885   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m02 && echo "ha-381619-m02" | sudo tee /etc/hostname
	I1028 17:25:45.647896   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m02
	
	I1028 17:25:45.647923   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.650559   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.650915   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.650946   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.651119   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.651299   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651606   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.651778   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.651948   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.651967   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:25:45.773264   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:25:45.773293   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:25:45.773315   32020 buildroot.go:174] setting up certificates
	I1028 17:25:45.773322   32020 provision.go:84] configureAuth start
	I1028 17:25:45.773330   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.773552   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:45.776602   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.776920   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.776944   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.777092   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.779167   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779415   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.779440   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779566   32020 provision.go:143] copyHostCerts
	I1028 17:25:45.779590   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779620   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:25:45.779629   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779712   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:25:45.779784   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779808   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:25:45.779815   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779839   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:25:45.779883   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779899   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:25:45.779905   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779925   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:25:45.779969   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m02 san=[127.0.0.1 192.168.39.171 ha-381619-m02 localhost minikube]
	I1028 17:25:45.949948   32020 provision.go:177] copyRemoteCerts
	I1028 17:25:45.950001   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:25:45.950022   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.952596   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.952955   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.953006   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.953158   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.953335   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.953473   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.953584   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.038279   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:25:46.038337   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:25:46.061947   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:25:46.062008   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:25:46.084393   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:25:46.084451   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:25:46.107114   32020 provision.go:87] duration metric: took 333.781683ms to configureAuth
	I1028 17:25:46.107142   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:25:46.107303   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:46.107385   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.110324   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110650   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.110678   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110841   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.111029   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111171   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111337   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.111521   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.111668   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.111682   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:25:46.333665   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:25:46.333687   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:25:46.333695   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetURL
	I1028 17:25:46.335063   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using libvirt version 6000000
	I1028 17:25:46.337491   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.337821   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.337850   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.338022   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:25:46.338038   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:25:46.338046   32020 client.go:171] duration metric: took 28.302974924s to LocalClient.Create
	I1028 17:25:46.338089   32020 start.go:167] duration metric: took 28.303046594s to libmachine.API.Create "ha-381619"
	I1028 17:25:46.338103   32020 start.go:293] postStartSetup for "ha-381619-m02" (driver="kvm2")
	I1028 17:25:46.338115   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:25:46.338137   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.338375   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:25:46.338401   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.340858   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341271   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.341298   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.341568   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.341713   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.341825   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.426689   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:25:46.431014   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:25:46.431038   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:25:46.431111   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:25:46.431208   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:25:46.431224   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:25:46.431391   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:25:46.440073   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:46.463120   32020 start.go:296] duration metric: took 125.005816ms for postStartSetup
	I1028 17:25:46.463168   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:46.463762   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.466198   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466494   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.466531   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466725   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:46.466921   32020 start.go:128] duration metric: took 28.448963909s to createHost
	I1028 17:25:46.466949   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.469249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469565   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.469589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469704   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.469861   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.469984   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.470143   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.470307   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.470485   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.470498   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:25:46.580856   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136346.562587281
	
	I1028 17:25:46.580878   32020 fix.go:216] guest clock: 1730136346.562587281
	I1028 17:25:46.580887   32020 fix.go:229] Guest: 2024-10-28 17:25:46.562587281 +0000 UTC Remote: 2024-10-28 17:25:46.466934782 +0000 UTC m=+73.797903078 (delta=95.652499ms)
	I1028 17:25:46.580901   32020 fix.go:200] guest clock delta is within tolerance: 95.652499ms
	I1028 17:25:46.580907   32020 start.go:83] releasing machines lock for "ha-381619-m02", held for 28.563026837s
	I1028 17:25:46.580924   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.581186   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.583856   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.584218   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.584249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.586494   32020 out.go:177] * Found network options:
	I1028 17:25:46.587894   32020 out.go:177]   - NO_PROXY=192.168.39.230
	W1028 17:25:46.589029   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589070   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589532   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589694   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589788   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:25:46.589827   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	W1028 17:25:46.589854   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589924   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:25:46.589942   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.592456   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592681   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592853   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.592873   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592998   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593129   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.593189   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.593257   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593327   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593495   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593488   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.593663   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593796   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.834104   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:25:46.840249   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:25:46.840309   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:25:46.857442   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:25:46.857462   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:25:46.857520   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:25:46.874062   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:25:46.887622   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:25:46.887678   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:25:46.901054   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:25:46.914614   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:25:47.030203   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:25:47.173397   32020 docker.go:233] disabling docker service ...
	I1028 17:25:47.173471   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:25:47.187602   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:25:47.200124   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:25:47.343002   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:25:47.463446   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:25:47.477391   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:25:47.495284   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:25:47.495336   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.505232   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:25:47.505290   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.515205   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.524903   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.534665   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:25:47.544548   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.554185   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.570492   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
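	For reference, the sed/grep edits above converge on a small drop-in; the following is only an illustrative sketch of that end state, assuming an otherwise default /etc/crio/crio.conf.d/02-crio.conf with the usual [crio.image]/[crio.runtime] section layout, not a file captured from this run.

	# Sketch only (assumed section placement), equivalent end state of the edits above:
	sudo tee /etc/crio/crio.conf.d/02-crio.conf <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF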
	I1028 17:25:47.580150   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:25:47.588959   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:25:47.588998   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:25:47.602144   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:25:47.611274   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:47.728237   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:25:47.819661   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:25:47.819739   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:25:47.825086   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:25:47.825133   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:25:47.828919   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:25:47.865608   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:25:47.865686   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.891971   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.920487   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:25:47.921941   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:25:47.923245   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:47.926002   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926296   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:47.926314   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926539   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:25:47.930572   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:47.943132   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:25:47.943291   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:47.943533   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.943566   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.957947   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I1028 17:25:47.958254   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.958709   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.958727   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.959022   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.959199   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:47.960488   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:47.960756   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.960791   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.974636   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I1028 17:25:47.975037   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.975478   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.975496   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.975773   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.975952   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:47.976140   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.171
	I1028 17:25:47.976153   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:47.976170   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:47.976307   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:47.976364   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:47.976377   32020 certs.go:256] generating profile certs ...
	I1028 17:25:47.976489   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:47.976518   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6
	I1028 17:25:47.976537   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.254]
	I1028 17:25:48.173298   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 ...
	I1028 17:25:48.173326   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6: {Name:mkf5ce350ef4737e80e11fe080b891074a0af9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173482   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 ...
	I1028 17:25:48.173493   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6: {Name:mk4892e87f7052cc8a58e00369d3170cecec3e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173560   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:48.173681   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:25:48.173810   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:25:48.173826   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:48.173840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:48.173854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:48.173866   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:48.173879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:48.173891   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:48.173902   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:48.173913   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:48.173957   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:48.173999   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:48.174009   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:48.174030   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:48.174051   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:48.174071   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:48.174117   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:48.174144   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.174158   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.174169   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.174198   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:48.177148   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177545   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:48.177579   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177737   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:48.177910   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:48.178048   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:48.178158   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:48.248817   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:25:48.254098   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:25:48.264499   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:25:48.268575   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:25:48.278929   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:25:48.283180   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:25:48.292856   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:25:48.296876   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:25:48.306132   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:25:48.310003   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:25:48.319418   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:25:48.323887   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:25:48.335408   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:48.360541   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:48.384095   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:48.407120   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:48.429601   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 17:25:48.452108   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:25:48.474717   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:48.497519   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:48.519884   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:48.542530   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:48.565246   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:48.587411   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:25:48.603353   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:25:48.618794   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:25:48.634198   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:25:48.649902   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:25:48.665540   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:25:48.680907   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 17:25:48.697446   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:48.703204   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:48.713589   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718016   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718162   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.723740   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:48.734297   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:48.744539   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748653   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748709   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.754164   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:48.764209   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:48.774379   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778691   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778734   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.784288   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
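	For reference, the <hash>.0 link names created above (b5213941.0, 3ec20f2e.0, 51391683.0) follow OpenSSL's subject-hash lookup convention; a minimal sketch of one such derivation, reusing the minikubeCA.pem path from the log:

	# Sketch only: derive the subject hash and create the lookup symlink,
	# which is what the `openssl x509 -hash` + `ln -fs` pairs above do.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"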
	I1028 17:25:48.794987   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:48.799006   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:48.799053   32020 kubeadm.go:934] updating node {m02 192.168.39.171 8443 v1.31.2 crio true true} ...
	I1028 17:25:48.799121   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:48.799142   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:48.799168   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:48.823470   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:48.823527   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 17:25:48.823569   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.835145   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:25:48.835188   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.844460   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:25:48.844491   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844545   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844552   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 17:25:48.844586   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm
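	For reference, the checksum=file:...sha256 suffix on the download URLs above pins each binary to its published SHA-256; a rough hand-run equivalent of that verification for kubelet, using the same release URLs, might look like:

	# Sketch only: fetch the kubelet binary and its published digest, then verify.
	curl -fsSLo kubelet        https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
	curl -fsSLo kubelet.sha256 https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check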
	I1028 17:25:48.848931   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:25:48.848960   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:25:49.845765   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.845846   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.851022   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:25:49.851049   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:25:49.995196   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:25:50.018003   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.018112   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.028108   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:25:50.028154   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
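The lines above show the binary staging step: for each of kubectl, kubeadm and kubelet, the cached file is only copied to /var/lib/minikube/binaries/v1.31.2 after a stat probe on the node fails. A minimal sketch of that check-then-copy pattern, shelling out to ssh/scp (the helper name is illustrative and it assumes the SSH user can write the target path; this is not minikube's actual ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBinary copies a locally cached binary to the node only if it is not
    // already present there, mirroring the stat-then-scp sequence in the log above.
    func ensureBinary(host, localPath, remotePath string) error {
        // Existence probe: the same `stat -c "%s %y"` the log shows; a non-zero
        // exit status means the file is missing and a transfer is needed.
        if err := exec.Command("ssh", host, "stat", "-c", "%s %y", remotePath).Run(); err == nil {
            return nil // already staged, nothing to do
        }
        // Create the target directory first, as the `sudo mkdir -p` line above does.
        if err := exec.Command("ssh", host, "sudo", "mkdir", "-p", "/var/lib/minikube/binaries/v1.31.2").Run(); err != nil {
            return fmt.Errorf("mkdir on node: %w", err)
        }
        return exec.Command("scp", localPath, host+":"+remotePath).Run()
    }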
	I1028 17:25:50.413235   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:25:50.422462   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 17:25:50.439863   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:50.457114   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:25:50.474256   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:50.477946   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:50.489942   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:50.615829   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:25:50.634721   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:50.635033   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:50.635082   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:50.649391   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I1028 17:25:50.649767   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:50.650191   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:50.650209   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:50.650503   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:50.650660   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:50.650788   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:50.650874   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:25:50.650889   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:50.653655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654061   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:50.654087   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654224   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:50.654401   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:50.654535   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:50.654636   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:50.789658   32020 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:50.789699   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443"
	I1028 17:26:12.167714   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443": (21.377987897s)
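The join itself is the standard kubeadm flow visible in the two Run/Completed lines above: the primary prints a join command (bootstrap token plus discovery CA-cert hash) and the new machine runs it with the extra control-plane flags. A hedged sketch of how such a command could be assembled, assuming kubeadm is on PATH on the primary (variable names are illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask the existing control plane for a fresh join command, as in the log above.
        out, err := exec.Command("sudo", "kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        advertiseIP := "192.168.39.171" // IP of the node being added (from this run)
        // Append the flags that turn a worker join into a control-plane join.
        join := strings.TrimSpace(string(out)) +
            " --control-plane --apiserver-advertise-address=" + advertiseIP +
            " --apiserver-bind-port=8443"
        // The resulting command is then executed on the joining machine
        // (minikube does that over SSH through its ssh_runner).
        fmt.Println(join)
    }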
	I1028 17:26:12.167759   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:26:12.604075   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m02 minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:26:12.730286   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:26:12.839048   32020 start.go:319] duration metric: took 22.188254958s to joinCluster
	I1028 17:26:12.839133   32020 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:12.839439   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:12.840330   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:26:12.841472   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:26:13.041048   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:26:13.058928   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:26:13.059251   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:26:13.059331   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:26:13.059574   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m02" to be "Ready" ...
	I1028 17:26:13.059667   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.059677   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.059688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.059694   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.077343   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:26:13.560169   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.560188   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.560196   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.560200   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.573882   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:14.060794   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.060818   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.060828   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.060835   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.068335   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:14.560535   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.560554   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.560562   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.560567   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.564008   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:15.060016   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.060055   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.060066   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.060072   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.064096   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:15.064637   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:15.559999   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.560030   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.560041   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.560046   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.563431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.059828   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.059852   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.059862   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.059867   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.063732   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.560697   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.560722   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.560733   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.560739   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.564261   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:17.060671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.060698   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.060711   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.060718   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.064995   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:17.066041   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:17.560713   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.560732   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.560749   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.563531   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:18.060093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.060116   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.060127   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.060135   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.064122   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:18.559857   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.559879   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.559887   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.559898   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.563832   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:19.059842   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.059867   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.059879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.059884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.065030   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:19.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.559871   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.559879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.559884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.562800   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:19.563587   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:20.059873   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.059895   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.059905   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.059912   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.073315   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:20.560212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.560231   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.560239   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.560243   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.650492   32020 round_trippers.go:574] Response Status: 200 OK in 90 milliseconds
	I1028 17:26:21.059937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.059963   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.059974   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.059979   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.064508   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:21.560559   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.560581   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.560590   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.560594   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.563714   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:21.564443   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:22.059724   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.059744   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.059752   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.059757   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.063391   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:22.560710   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.560731   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.560738   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.563846   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.060524   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.060544   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.060554   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.060561   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.064448   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.560417   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.560438   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.560447   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.560451   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.563535   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.060636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.060664   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.060675   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.060683   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.064043   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.064451   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:24.559868   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.559899   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.559907   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.559910   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.562925   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:25.059880   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.059902   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.059910   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.059915   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.063972   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:25.559872   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.559894   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.559901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.559905   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.563081   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:26.060748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.060770   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.060782   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.060788   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.064990   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:26.065576   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:26.559841   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.559863   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.559871   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.559876   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.562740   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:27.059746   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.059768   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.059775   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.059779   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.063135   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:27.560126   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.560145   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.560153   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.560158   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.563096   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:28.060723   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.060746   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.060757   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.060763   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.065003   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:28.560732   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.560757   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.560767   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.560774   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.563965   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:28.564617   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:29.059876   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.059903   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.059914   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.059919   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.067282   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:29.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.559872   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.559880   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.559883   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.562804   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:30.059831   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.059853   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.059867   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.059875   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.063855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:30.560631   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.560653   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.560665   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.560670   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.563630   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:31.059907   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.059925   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.059933   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.059938   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.064319   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:31.065078   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:31.560248   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.560271   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.560278   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.560282   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.563146   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:32.059755   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.059779   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.059790   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.059796   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.065145   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:32.560006   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.560026   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.560034   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.560038   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.563453   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.060614   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.060633   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.060641   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.060647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.064544   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.066373   32020 node_ready.go:49] node "ha-381619-m02" has status "Ready":"True"
	I1028 17:26:33.066389   32020 node_ready.go:38] duration metric: took 20.006796944s for node "ha-381619-m02" to be "Ready" ...
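The long run of GET /api/v1/nodes/ha-381619-m02 requests above is a readiness poll: roughly every 500ms the node object is fetched and its Ready condition inspected until it flips to True (about 20s in this run). A comparable loop written against client-go might look like the sketch below; the helper name and intervals are assumptions, not minikube's node_ready implementation:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // matching the ~500ms request cadence visible in the log above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as transient and keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }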
	I1028 17:26:33.066397   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:26:33.066462   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:33.066470   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.066477   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.066482   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.074203   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:33.082515   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.082586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:26:33.082595   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.082602   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.082607   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.095144   32020 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 17:26:33.095832   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.095846   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.095854   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.095858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.101134   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:33.101733   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.101757   32020 pod_ready.go:82] duration metric: took 19.21928ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101770   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101833   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:26:33.101844   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.101853   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.101858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.105945   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.108337   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.108355   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.108367   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.108372   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.113026   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.113662   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.113683   32020 pod_ready.go:82] duration metric: took 11.906137ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113694   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113752   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:26:33.113762   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.113774   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.113782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.123002   32020 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 17:26:33.123632   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.123647   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.123654   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.123658   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.127965   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.128570   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.128593   32020 pod_ready.go:82] duration metric: took 14.890353ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128604   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128669   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:26:33.128680   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.128690   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.128695   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.132736   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.133266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.133282   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.133291   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.133297   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.135365   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:33.135735   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.135750   32020 pod_ready.go:82] duration metric: took 7.136636ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.135762   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.261122   32020 request.go:632] Waited for 125.293136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261209   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261217   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.261226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.261234   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.263967   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:33.461031   32020 request.go:632] Waited for 196.380501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461114   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461126   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.461137   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.461148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.465245   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.465839   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.465854   32020 pod_ready.go:82] duration metric: took 330.085581ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
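The repeated "Waited for ... due to client-side throttling" messages are client-go's default rate limiter at work: the rest.Config logged earlier has QPS:0 and Burst:0, so the library falls back to its defaults (5 requests/s, burst 10) and back-to-back status checks get spaced out. Where that throttling is unwanted, the limits can be raised when building the client; a small sketch, with the kubeconfig path as an illustrative parameter:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // Raise the client-side rate limits; with the zero values seen in the log,
        // client-go uses its defaults of 5 QPS and burst 10.
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }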
	I1028 17:26:33.465863   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.661130   32020 request.go:632] Waited for 195.210858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661218   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.661226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.661231   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.664592   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.861613   32020 request.go:632] Waited for 196.398754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861693   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.861703   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.861708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.865300   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.865923   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.865943   32020 pod_ready.go:82] duration metric: took 400.074085ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.865954   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.061082   32020 request.go:632] Waited for 195.035949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061146   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061154   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.061164   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.061177   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.065243   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:34.261295   32020 request.go:632] Waited for 195.377372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261362   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261369   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.261377   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.261384   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.264122   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:34.264806   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.264824   32020 pod_ready.go:82] duration metric: took 398.860925ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.264834   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.461015   32020 request.go:632] Waited for 196.107238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461086   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461092   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.461099   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.461107   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.464532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.661679   32020 request.go:632] Waited for 196.369344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661755   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.661763   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.661769   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.664905   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.665450   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.665471   32020 pod_ready.go:82] duration metric: took 400.628457ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.665485   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.861555   32020 request.go:632] Waited for 195.998426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861607   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861612   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.861619   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.861625   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.865054   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.061002   32020 request.go:632] Waited for 195.260133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061074   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061081   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.061090   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.061103   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.067316   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:35.067855   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.067872   32020 pod_ready.go:82] duration metric: took 402.381503ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.067883   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.261021   32020 request.go:632] Waited for 193.06469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261075   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261080   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.261087   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.261091   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.264532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.461647   32020 request.go:632] Waited for 196.379594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461699   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461704   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.461712   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.461716   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.464708   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:35.465310   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.465326   32020 pod_ready.go:82] duration metric: took 397.438256ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.465336   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.660832   32020 request.go:632] Waited for 195.429914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660887   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660892   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.660901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.660906   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.664825   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.861091   32020 request.go:632] Waited for 195.400527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861176   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861185   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.861193   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.861199   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.864874   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.865496   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.865512   32020 pod_ready.go:82] duration metric: took 400.170514ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.865524   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.061640   32020 request.go:632] Waited for 196.040174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061702   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.061709   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.061712   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.067912   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:36.260741   32020 request.go:632] Waited for 192.270672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260796   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260801   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.260808   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.260811   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.264431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.265062   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:36.265078   32020 pod_ready.go:82] duration metric: took 399.548106ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.265089   32020 pod_ready.go:39] duration metric: took 3.19868237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:26:36.265105   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:26:36.265162   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:26:36.280395   32020 api_server.go:72] duration metric: took 23.441229274s to wait for apiserver process to appear ...
	I1028 17:26:36.280422   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:26:36.280444   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:26:36.284951   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 17:26:36.285015   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:26:36.285023   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.285030   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.285034   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.285954   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:26:36.286036   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:26:36.286049   32020 api_server.go:131] duration metric: took 5.621129ms to wait for apiserver health ...
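After the poll loops, the API server itself is confirmed healthy with a raw GET on /healthz (which answers 200 with the literal body "ok") followed by a /version request. An equivalent check with client-go could look like this sketch, with error handling trimmed to the essentials:

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        // Raw /healthz probe: a healthy API server returns the body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", body)

        // /version reports the control-plane version (v1.31.2 in this run).
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", v.GitVersion)
        return nil
    }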
	I1028 17:26:36.286055   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:26:36.461480   32020 request.go:632] Waited for 175.36266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461560   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461566   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.461573   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.461579   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.465870   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.471332   32020 system_pods.go:59] 17 kube-system pods found
	I1028 17:26:36.471364   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.471372   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.471378   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.471384   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.471389   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.471394   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.471398   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.471404   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.471410   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.471415   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.471420   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.471423   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.471427   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.471431   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.471439   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.471443   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.471447   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.471452   32020 system_pods.go:74] duration metric: took 185.392371ms to wait for pod list to return data ...
	I1028 17:26:36.471461   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:26:36.660798   32020 request.go:632] Waited for 189.265217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660858   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660865   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.660876   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.660890   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.664250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.664492   32020 default_sa.go:45] found service account: "default"
	I1028 17:26:36.664512   32020 default_sa.go:55] duration metric: took 193.044588ms for default service account to be created ...
	I1028 17:26:36.664525   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:26:36.860686   32020 request.go:632] Waited for 196.070222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860774   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860785   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.860796   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.860806   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.865017   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.869263   32020 system_pods.go:86] 17 kube-system pods found
	I1028 17:26:36.869283   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.869289   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.869294   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.869300   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.869305   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.869318   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.869324   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.869332   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.869341   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.869344   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.869348   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.869351   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.869355   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.869359   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.869362   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.869368   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.869371   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.869378   32020 system_pods.go:126] duration metric: took 204.847439ms to wait for k8s-apps to be running ...
	I1028 17:26:36.869387   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:26:36.869438   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:26:36.887558   32020 system_svc.go:56] duration metric: took 18.164041ms WaitForService to wait for kubelet
	I1028 17:26:36.887583   32020 kubeadm.go:582] duration metric: took 24.048418465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:26:36.887603   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:26:37.061041   32020 request.go:632] Waited for 173.358173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061125   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061137   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:37.061147   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:37.061157   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:37.065908   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:37.066717   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066739   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066750   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066754   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066758   32020 node_conditions.go:105] duration metric: took 179.146781ms to run NodePressure ...
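The NodePressure step reads each node's reported capacity straight from the Node objects. A rough client-go sketch of the same lookup (the kubeconfig path is a placeholder, not taken from the log):

    // node_capacity.go - illustrative client-go sketch, not minikube source.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; minikube wires up its client differently.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }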
	I1028 17:26:37.066780   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:26:37.066813   32020 start.go:255] writing updated cluster config ...
	I1028 17:26:37.068764   32020 out.go:201] 
	I1028 17:26:37.070024   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:37.070105   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.071682   32020 out.go:177] * Starting "ha-381619-m03" control-plane node in "ha-381619" cluster
	I1028 17:26:37.072951   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:26:37.072974   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:26:37.073061   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:26:37.073071   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:26:37.073157   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.073328   32020 start.go:360] acquireMachinesLock for ha-381619-m03: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:26:37.073367   32020 start.go:364] duration metric: took 22.448µs to acquireMachinesLock for "ha-381619-m03"
	I1028 17:26:37.073383   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:37.073468   32020 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 17:26:37.074992   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:26:37.075063   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:26:37.075098   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:26:37.089635   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I1028 17:26:37.090045   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:26:37.090591   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:26:37.090617   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:26:37.090932   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:26:37.091131   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:26:37.091290   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:26:37.091438   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:26:37.091470   32020 client.go:168] LocalClient.Create starting
	I1028 17:26:37.091506   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:26:37.091543   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091562   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091624   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:26:37.091649   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091665   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091691   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:26:37.091702   32020 main.go:141] libmachine: (ha-381619-m03) Calling .PreCreateCheck
	I1028 17:26:37.091853   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:26:37.092216   32020 main.go:141] libmachine: Creating machine...
	I1028 17:26:37.092231   32020 main.go:141] libmachine: (ha-381619-m03) Calling .Create
	I1028 17:26:37.092346   32020 main.go:141] libmachine: (ha-381619-m03) Creating KVM machine...
	I1028 17:26:37.093689   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing default KVM network
	I1028 17:26:37.093825   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing private KVM network mk-ha-381619
	I1028 17:26:37.094015   32020 main.go:141] libmachine: (ha-381619-m03) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.094041   32020 main.go:141] libmachine: (ha-381619-m03) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:26:37.094128   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.093979   32807 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.094183   32020 main.go:141] libmachine: (ha-381619-m03) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:26:37.334476   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.334350   32807 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa...
	I1028 17:26:37.512343   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512238   32807 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk...
	I1028 17:26:37.512368   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing magic tar header
	I1028 17:26:37.512408   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing SSH key tar header
	I1028 17:26:37.512432   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512349   32807 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.512450   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03
	I1028 17:26:37.512458   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:26:37.512478   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.512486   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:26:37.512517   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:26:37.512536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:26:37.512545   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 (perms=drwx------)
	I1028 17:26:37.512553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home
	I1028 17:26:37.512565   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:26:37.512581   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:26:37.512594   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:26:37.512609   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:26:37.512619   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Skipping /home - not owner
	I1028 17:26:37.512629   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:26:37.512638   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:37.513512   32020 main.go:141] libmachine: (ha-381619-m03) define libvirt domain using xml: 
	I1028 17:26:37.513530   32020 main.go:141] libmachine: (ha-381619-m03) <domain type='kvm'>
	I1028 17:26:37.513546   32020 main.go:141] libmachine: (ha-381619-m03)   <name>ha-381619-m03</name>
	I1028 17:26:37.513552   32020 main.go:141] libmachine: (ha-381619-m03)   <memory unit='MiB'>2200</memory>
	I1028 17:26:37.513557   32020 main.go:141] libmachine: (ha-381619-m03)   <vcpu>2</vcpu>
	I1028 17:26:37.513561   32020 main.go:141] libmachine: (ha-381619-m03)   <features>
	I1028 17:26:37.513566   32020 main.go:141] libmachine: (ha-381619-m03)     <acpi/>
	I1028 17:26:37.513572   32020 main.go:141] libmachine: (ha-381619-m03)     <apic/>
	I1028 17:26:37.513577   32020 main.go:141] libmachine: (ha-381619-m03)     <pae/>
	I1028 17:26:37.513584   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513589   32020 main.go:141] libmachine: (ha-381619-m03)   </features>
	I1028 17:26:37.513595   32020 main.go:141] libmachine: (ha-381619-m03)   <cpu mode='host-passthrough'>
	I1028 17:26:37.513600   32020 main.go:141] libmachine: (ha-381619-m03)   
	I1028 17:26:37.513606   32020 main.go:141] libmachine: (ha-381619-m03)   </cpu>
	I1028 17:26:37.513611   32020 main.go:141] libmachine: (ha-381619-m03)   <os>
	I1028 17:26:37.513617   32020 main.go:141] libmachine: (ha-381619-m03)     <type>hvm</type>
	I1028 17:26:37.513622   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='cdrom'/>
	I1028 17:26:37.513630   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='hd'/>
	I1028 17:26:37.513634   32020 main.go:141] libmachine: (ha-381619-m03)     <bootmenu enable='no'/>
	I1028 17:26:37.513638   32020 main.go:141] libmachine: (ha-381619-m03)   </os>
	I1028 17:26:37.513643   32020 main.go:141] libmachine: (ha-381619-m03)   <devices>
	I1028 17:26:37.513647   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='cdrom'>
	I1028 17:26:37.513655   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/boot2docker.iso'/>
	I1028 17:26:37.513660   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hdc' bus='scsi'/>
	I1028 17:26:37.513664   32020 main.go:141] libmachine: (ha-381619-m03)       <readonly/>
	I1028 17:26:37.513668   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513673   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='disk'>
	I1028 17:26:37.513679   32020 main.go:141] libmachine: (ha-381619-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:26:37.513689   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk'/>
	I1028 17:26:37.513697   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hda' bus='virtio'/>
	I1028 17:26:37.513728   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513752   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513762   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='mk-ha-381619'/>
	I1028 17:26:37.513777   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513799   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513818   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513832   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='default'/>
	I1028 17:26:37.513842   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513850   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513860   32020 main.go:141] libmachine: (ha-381619-m03)     <serial type='pty'>
	I1028 17:26:37.513868   32020 main.go:141] libmachine: (ha-381619-m03)       <target port='0'/>
	I1028 17:26:37.513877   32020 main.go:141] libmachine: (ha-381619-m03)     </serial>
	I1028 17:26:37.513888   32020 main.go:141] libmachine: (ha-381619-m03)     <console type='pty'>
	I1028 17:26:37.513899   32020 main.go:141] libmachine: (ha-381619-m03)       <target type='serial' port='0'/>
	I1028 17:26:37.513908   32020 main.go:141] libmachine: (ha-381619-m03)     </console>
	I1028 17:26:37.513919   32020 main.go:141] libmachine: (ha-381619-m03)     <rng model='virtio'>
	I1028 17:26:37.513932   32020 main.go:141] libmachine: (ha-381619-m03)       <backend model='random'>/dev/random</backend>
	I1028 17:26:37.513941   32020 main.go:141] libmachine: (ha-381619-m03)     </rng>
	I1028 17:26:37.513954   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513965   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513971   32020 main.go:141] libmachine: (ha-381619-m03)   </devices>
	I1028 17:26:37.513978   32020 main.go:141] libmachine: (ha-381619-m03) </domain>
	I1028 17:26:37.513992   32020 main.go:141] libmachine: (ha-381619-m03) 
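The domain XML printed above is handed to libvirt, which defines and then starts the VM. A bare-bones sketch of that handoff with the Go libvirt bindings (assuming the libvirt.org/go/libvirt package; the kvm2 driver does this behind its machine-plugin RPC rather than inline like this):

    // define_domain.go - illustrative libvirt-go sketch, not the kvm2 driver itself.
    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI from the config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        xml, err := os.ReadFile("ha-381619-m03.xml") // the domain XML logged above
        if err != nil {
            panic(err)
        }
        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // "Creating domain..." actually boots the VM
            panic(err)
        }
        fmt.Println("domain defined and started")
    }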
	I1028 17:26:37.520796   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:6b:b8:f1 in network default
	I1028 17:26:37.521360   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring networks are active...
	I1028 17:26:37.521387   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:37.521985   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network default is active
	I1028 17:26:37.522251   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network mk-ha-381619 is active
	I1028 17:26:37.522562   32020 main.go:141] libmachine: (ha-381619-m03) Getting domain xml...
	I1028 17:26:37.523108   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:38.733507   32020 main.go:141] libmachine: (ha-381619-m03) Waiting to get IP...
	I1028 17:26:38.734445   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:38.734847   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:38.734874   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:38.734831   32807 retry.go:31] will retry after 277.511241ms: waiting for machine to come up
	I1028 17:26:39.014311   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.014705   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.014731   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.014657   32807 retry.go:31] will retry after 249.568431ms: waiting for machine to come up
	I1028 17:26:39.266003   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.266417   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.266438   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.266379   32807 retry.go:31] will retry after 332.313659ms: waiting for machine to come up
	I1028 17:26:39.599811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.600199   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.600224   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.600155   32807 retry.go:31] will retry after 498.320063ms: waiting for machine to come up
	I1028 17:26:40.099601   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.100068   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.100102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.100010   32807 retry.go:31] will retry after 620.508522ms: waiting for machine to come up
	I1028 17:26:40.721631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.722075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.722102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.722032   32807 retry.go:31] will retry after 786.320854ms: waiting for machine to come up
	I1028 17:26:41.509664   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:41.510180   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:41.510208   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:41.510141   32807 retry.go:31] will retry after 1.021116287s: waiting for machine to come up
	I1028 17:26:42.532494   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:42.532913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:42.532943   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:42.532860   32807 retry.go:31] will retry after 1.335656065s: waiting for machine to come up
	I1028 17:26:43.870447   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:43.870913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:43.870940   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:43.870865   32807 retry.go:31] will retry after 1.720265412s: waiting for machine to come up
	I1028 17:26:45.593694   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:45.594300   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:45.594326   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:45.594243   32807 retry.go:31] will retry after 1.629048478s: waiting for machine to come up
	I1028 17:26:47.224808   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:47.225182   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:47.225207   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:47.225159   32807 retry.go:31] will retry after 2.592881751s: waiting for machine to come up
	I1028 17:26:49.819232   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:49.819722   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:49.819742   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:49.819691   32807 retry.go:31] will retry after 2.406064511s: waiting for machine to come up
	I1028 17:26:52.227365   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:52.227723   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:52.227744   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:52.227706   32807 retry.go:31] will retry after 4.047640597s: waiting for machine to come up
	I1028 17:26:56.276662   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:56.277135   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:56.277158   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:56.277104   32807 retry.go:31] will retry after 4.243512083s: waiting for machine to come up
	I1028 17:27:00.523220   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.523671   32020 main.go:141] libmachine: (ha-381619-m03) Found IP for machine: 192.168.39.17
	I1028 17:27:00.523698   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has current primary IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.523706   32020 main.go:141] libmachine: (ha-381619-m03) Reserving static IP address...
	I1028 17:27:00.524025   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "ha-381619-m03", mac: "52:54:00:d7:8c:62", ip: "192.168.39.17"} in network mk-ha-381619
	I1028 17:27:00.592781   32020 main.go:141] libmachine: (ha-381619-m03) Reserved static IP address: 192.168.39.17
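The "will retry after …" lines above are a simple poll with a growing delay until the new domain shows up in the network's DHCP leases. A generic sketch of that pattern (lookupIP is a hypothetical stand-in for the lease query; minikube's own helper lives in retry.go):

    // poll_ip.go - generic backoff-polling sketch; lookupIP is a hypothetical
    // stand-in for querying the DHCP leases of network mk-ha-381619.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP would ask libvirt for the lease matching a MAC address; stubbed out here.
    func lookupIP(mac string) (string, error) { return "", errNoLease }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 4*time.Second { // grow the delay, roughly like the log
                delay = delay * 3 / 2
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:d7:8c:62", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }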
	I1028 17:27:00.592808   32020 main.go:141] libmachine: (ha-381619-m03) Waiting for SSH to be available...
	I1028 17:27:00.592817   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:00.595728   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.595996   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619
	I1028 17:27:00.596032   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:d7:8c:62
	I1028 17:27:00.596173   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:00.596195   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:00.596242   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:00.596266   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:00.596292   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:00.599869   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:27:00.599886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:27:00.599893   32020 main.go:141] libmachine: (ha-381619-m03) DBG | command : exit 0
	I1028 17:27:00.599897   32020 main.go:141] libmachine: (ha-381619-m03) DBG | err     : exit status 255
	I1028 17:27:00.599912   32020 main.go:141] libmachine: (ha-381619-m03) DBG | output  : 
	I1028 17:27:03.600719   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:03.602993   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603307   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.603342   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603475   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:03.603507   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:03.603540   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:03.603558   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:03.603573   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:03.732419   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 17:27:03.732661   32020 main.go:141] libmachine: (ha-381619-m03) KVM machine creation complete!
	I1028 17:27:03.732966   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:03.733514   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733669   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733799   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:27:03.733816   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetState
	I1028 17:27:03.734895   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:27:03.734910   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:27:03.734928   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:27:03.734939   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.737530   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.737905   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.737933   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.738103   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.738238   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738419   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738528   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.738669   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.738865   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.738879   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:27:03.843630   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
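WaitForSSH simply keeps running `exit 0` over SSH until the command returns cleanly, which is what the earlier exit status 255 followed by the <nil> result above reflects. A compact, illustrative version with golang.org/x/crypto/ssh (the address, user and key path are copied from the log purely for flavour):

    // wait_ssh.go - illustrative sketch of the "run `exit 0` until it works" check.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func sshReady(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0") // a nil error means the guest's sshd is ready
    }

    func main() {
        for {
            if err := sshReady("192.168.39.17:22", "docker",
                "/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa"); err != nil {
                fmt.Println("ssh not ready yet:", err)
                time.Sleep(3 * time.Second) // retry on roughly the cadence seen in the log
                continue
            }
            fmt.Println("ssh is available")
            return
        }
    }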
	I1028 17:27:03.843655   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:27:03.843666   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.846510   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.846865   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.846886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.847091   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.847261   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847412   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847510   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.847671   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.847870   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.847884   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:27:03.953430   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:27:03.953486   32020 main.go:141] libmachine: found compatible host: buildroot
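Provisioner detection is just `cat /etc/os-release` plus a key=value parse of fields such as ID and VERSION_ID. A small sketch of that parse, fed with the output captured above:

    // osrelease.go - illustrative parse of /etc/os-release-style key=value output.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func parseOSRelease(s string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(s))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(sample)
        fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9 -> "compatible host: buildroot"
    }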
	I1028 17:27:03.953497   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:27:03.953508   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.953779   32020 buildroot.go:166] provisioning hostname "ha-381619-m03"
	I1028 17:27:03.953819   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.954012   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.956989   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957430   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.957456   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957613   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.957773   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.957930   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.958072   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.958232   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.958456   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.958476   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m03 && echo "ha-381619-m03" | sudo tee /etc/hostname
	I1028 17:27:04.082564   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m03
	
	I1028 17:27:04.082596   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.085190   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085543   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.085567   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.085952   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086175   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.086298   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.086473   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.086494   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:27:04.201141   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:27:04.201171   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:27:04.201191   32020 buildroot.go:174] setting up certificates
	I1028 17:27:04.201204   32020 provision.go:84] configureAuth start
	I1028 17:27:04.201213   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:04.201449   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.204201   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.204661   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204749   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.206751   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.207092   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207247   32020 provision.go:143] copyHostCerts
	I1028 17:27:04.207276   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207314   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:27:04.207334   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207429   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:27:04.207519   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207543   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:27:04.207552   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207589   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:27:04.207646   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207670   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:27:04.207679   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207710   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:27:04.207772   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m03 san=[127.0.0.1 192.168.39.17 ha-381619-m03 localhost minikube]
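The server certificate above is minted with a SAN list covering the node's hostname and IPs and is signed by the cluster CA. An illustrative crypto/x509 sketch of the same shape (the throwaway CA generated here stands in for minikube's ca.pem/ca-key.pem; error handling is elided for brevity):

    // server_cert.go - illustrative sketch of a server cert with SANs; the
    // in-memory CA below is an assumption standing in for minikube's CA files.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube loads this from certs/ca.pem and certs/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SAN list from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-381619-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            DNSNames:     []string{"ha-381619-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.17")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }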
	I1028 17:27:04.311071   32020 provision.go:177] copyRemoteCerts
	I1028 17:27:04.311121   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:27:04.311145   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.313577   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.313977   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.314019   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.314168   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.314347   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.314472   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.314623   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.403135   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:27:04.403211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:27:04.427834   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:27:04.427894   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:27:04.450833   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:27:04.450900   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:27:04.473452   32020 provision.go:87] duration metric: took 272.234677ms to configureAuth
	I1028 17:27:04.473476   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:27:04.473653   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:04.473713   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.476526   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.476861   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.476881   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.477065   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.477235   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477353   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477466   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.477631   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.477821   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.477837   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:27:04.708532   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:27:04.708562   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:27:04.708571   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetURL
	I1028 17:27:04.709704   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using libvirt version 6000000
	I1028 17:27:04.711553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.711850   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.711877   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.712051   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:27:04.712065   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:27:04.712074   32020 client.go:171] duration metric: took 27.620592933s to LocalClient.Create
	I1028 17:27:04.712101   32020 start.go:167] duration metric: took 27.620663816s to libmachine.API.Create "ha-381619"
	I1028 17:27:04.712114   32020 start.go:293] postStartSetup for "ha-381619-m03" (driver="kvm2")
	I1028 17:27:04.712128   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:27:04.712149   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.712379   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:27:04.712408   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.714536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.714835   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.714862   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.715020   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.715209   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.715341   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.715464   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.799357   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:27:04.803701   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:27:04.803723   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:27:04.803779   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:27:04.803846   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:27:04.803856   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:27:04.803932   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:27:04.813520   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:04.836571   32020 start.go:296] duration metric: took 124.443928ms for postStartSetup
	I1028 17:27:04.836615   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:04.837172   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.839735   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840084   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.840105   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840305   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:27:04.840512   32020 start.go:128] duration metric: took 27.767033157s to createHost
	I1028 17:27:04.840535   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.842741   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.843096   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843188   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.843354   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843499   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843648   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.843814   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.843957   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.843967   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:27:04.948925   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136424.929789330
	
	I1028 17:27:04.948945   32020 fix.go:216] guest clock: 1730136424.929789330
	I1028 17:27:04.948951   32020 fix.go:229] Guest: 2024-10-28 17:27:04.92978933 +0000 UTC Remote: 2024-10-28 17:27:04.840524406 +0000 UTC m=+152.171492636 (delta=89.264924ms)
	I1028 17:27:04.948966   32020 fix.go:200] guest clock delta is within tolerance: 89.264924ms
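	The two fix.go lines above are the guest-clock check: minikube runs `date +%s.%N` on the new machine over SSH and compares the result against the host clock before releasing the machine lock. A minimal sketch of that comparison, in Go, assuming the guest output has already been read over SSH and using an illustrative one-second tolerance (the real tolerance is whatever fix.go applies; this is not minikube's code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns `date +%s.%N` output (e.g. "1730136424.929789330") into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		f, err := strconv.ParseFloat(strings.TrimSpace(s), 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(f)
		nsec := int64((f - float64(sec)) * 1e9)
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// In minikube this string comes back over SSH; hard-coded here so the sketch runs.
		guestOut := "1730136424.929789330"

		guest, err := parseEpoch(guestOut)
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative only
		if delta > tolerance {
			fmt.Printf("guest clock off by %v, would resync\n", delta)
			return
		}
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}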
	I1028 17:27:04.948971   32020 start.go:83] releasing machines lock for "ha-381619-m03", held for 27.875595959s
	I1028 17:27:04.948986   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.949230   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.952087   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.952552   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.952580   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.954772   32020 out.go:177] * Found network options:
	I1028 17:27:04.956124   32020 out.go:177]   - NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:04.957329   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957826   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957978   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.958075   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:27:04.958124   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.958183   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:27:04.958205   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.960811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961141   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961168   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961186   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961307   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961462   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.961599   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.961617   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961637   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961711   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.961806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961908   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.962057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.962208   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:05.194026   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:27:05.201042   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:27:05.201105   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:27:05.217646   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:27:05.217662   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:27:05.217711   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:27:05.236089   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:27:05.251712   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:27:05.251757   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:27:05.266922   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:27:05.282192   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:27:05.400766   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:27:05.540458   32020 docker.go:233] disabling docker service ...
	I1028 17:27:05.540536   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:27:05.554384   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:27:05.566632   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:27:05.704365   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:27:05.814298   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:27:05.832161   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:27:05.850391   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:27:05.850440   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.860158   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:27:05.860214   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.870182   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.880040   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.890188   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:27:05.901036   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.911295   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.928814   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.939099   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:27:05.949052   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:27:05.949107   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:27:05.961188   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
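	The commands above are the netfilter preparation for the CNI: the sysctl probe fails because the br_netfilter module is not loaded yet, so minikube loads it and then enables IPv4 forwarding. A minimal sketch of the same check-then-load sequence (not minikube's implementation; run locally with sudo available):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// The sysctl key only exists once br_netfilter is loaded.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("bridge netfilter not available, loading br_netfilter:", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				fmt.Println("modprobe failed:", err)
				os.Exit(1)
			}
		}
		// Kubernetes networking needs IPv4 forwarding on every node.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Println("enabling ip_forward failed:", err)
			os.Exit(1)
		}
	}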
	I1028 17:27:05.970308   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:06.082126   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:27:06.186312   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:27:06.186399   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:27:06.191449   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:27:06.191503   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:27:06.195251   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:27:06.231675   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:27:06.231743   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.263999   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.295360   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:27:06.296610   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:27:06.297916   32020 out.go:177]   - env NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:06.299066   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:06.302357   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.302805   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:06.302853   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.303125   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:27:06.307684   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:27:06.322487   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:27:06.322674   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:06.322921   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.322954   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.337329   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I1028 17:27:06.337793   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.338350   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.338369   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.338643   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.338806   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:27:06.340173   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:06.340491   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.340528   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.354028   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I1028 17:27:06.354441   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.354853   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.354871   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.355207   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.355398   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:06.355555   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.17
	I1028 17:27:06.355568   32020 certs.go:194] generating shared ca certs ...
	I1028 17:27:06.355587   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.355706   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:27:06.355743   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:27:06.355752   32020 certs.go:256] generating profile certs ...
	I1028 17:27:06.355818   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:27:06.355840   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131
	I1028 17:27:06.355854   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.17 192.168.39.254]
	I1028 17:27:06.615352   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 ...
	I1028 17:27:06.615384   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131: {Name:mk30b1e5a01615c193463ae31058813eb757a15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615571   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 ...
	I1028 17:27:06.615587   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131: {Name:mkc1142cb1e41a27aeb0597e6f743604179f8b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615684   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:27:06.615844   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
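	certs.go above issues the apiserver serving certificate for the new control-plane node, signed by the shared minikubeCA and carrying the service IP, localhost, all three node IPs and the HA VIP 192.168.39.254 as SANs. A self-contained sketch of issuing such a certificate with crypto/x509 (the CA is generated in place here purely so the example runs; minikube reuses the existing ~/.minikube/ca.crt and ca.key):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Stand-in CA; in minikube this key pair already exists on disk.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Serving cert with the SAN IPs from the log above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.230"), net.ParseIP("192.168.39.171"),
				net.ParseIP("192.168.39.17"), net.ParseIP("192.168.39.254"),
			},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}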
	I1028 17:27:06.616012   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:27:06.616031   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:27:06.616048   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:27:06.616067   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:27:06.616091   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:27:06.616107   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:27:06.616121   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:27:06.616138   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:27:06.632549   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:27:06.632628   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:27:06.632669   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:27:06.632680   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:27:06.632702   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:27:06.632732   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:27:06.632764   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:27:06.632808   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:06.632854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:27:06.632879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:06.632897   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:27:06.632955   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:06.635620   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.635992   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:06.636039   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.636203   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:06.636373   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:06.636547   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:06.636691   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:06.708743   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:27:06.714395   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:27:06.725274   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:27:06.729452   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:27:06.739682   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:27:06.743778   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:27:06.753533   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:27:06.757406   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:27:06.768515   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:27:06.772684   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:27:06.783594   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:27:06.788182   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:27:06.798917   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:27:06.824680   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:27:06.848168   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:27:06.870934   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:27:06.894622   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 17:27:06.916995   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:27:06.939854   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:27:06.962079   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:27:06.985176   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:27:07.007959   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:27:07.031196   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:27:07.054116   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:27:07.071809   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:27:07.087821   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:27:07.105114   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:27:07.121456   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:27:07.137929   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:27:07.153936   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 17:27:07.169928   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:27:07.176125   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:27:07.186611   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191749   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191791   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.197474   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:27:07.208145   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:27:07.219642   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224041   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224081   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.229665   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:27:07.240477   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:27:07.251279   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255404   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255446   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.260823   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:27:07.271234   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:27:07.275094   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:27:07.275142   32020 kubeadm.go:934] updating node {m03 192.168.39.17 8443 v1.31.2 crio true true} ...
	I1028 17:27:07.275277   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:27:07.275318   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:27:07.275356   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:27:07.290975   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:27:07.291032   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 17:27:07.291070   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.301885   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:27:07.301930   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.312754   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 17:27:07.312779   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312836   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:27:07.312864   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 17:27:07.312926   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312927   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:07.317184   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:27:07.317211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:27:07.352999   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:27:07.353042   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:27:07.353044   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.353130   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.410351   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:27:07.410406   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
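	Because /var/lib/minikube/binaries/v1.31.2 is empty on the fresh node, kubeadm, kubectl and kubelet are copied over from the local cache; when the cache itself is cold they are fetched from dl.k8s.io using the URL and .sha256 checksum shown at binary.go:74. A minimal sketch of that download-and-verify step (kubeadm only, illustrative output path; not minikube's downloader):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm"

		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}

		got := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // the .sha256 file carries the hex digest
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch, refusing to install")
		}
		// 0755 so the binary is executable once staged under the binaries directory.
		if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubeadm verified and written")
	}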
	I1028 17:27:08.136367   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:27:08.145689   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 17:27:08.162514   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:27:08.178802   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:27:08.195002   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:27:08.198953   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:27:08.210803   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:08.352163   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:27:08.377094   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:08.377585   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:08.377645   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:08.394262   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I1028 17:27:08.394687   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:08.395242   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:08.395276   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:08.395635   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:08.395837   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:08.396078   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:27:08.396215   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:27:08.396230   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:08.399082   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399537   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:08.399566   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399713   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:08.399904   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:08.400043   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:08.400171   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:08.552541   32020 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:08.552592   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I1028 17:27:30.870343   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (22.317699091s)
	I1028 17:27:30.870408   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:27:31.352565   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m03 minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:27:31.535264   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:27:31.653788   32020 start.go:319] duration metric: took 23.257712014s to joinCluster
	I1028 17:27:31.653906   32020 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:31.654293   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:31.655305   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:27:31.656854   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:31.931462   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:27:32.007668   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:27:32.008012   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:27:32.008099   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:27:32.008418   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m03" to be "Ready" ...
	I1028 17:27:32.008555   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.008568   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.008580   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.008590   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.012013   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:32.509493   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.509514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.509522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.509526   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.512995   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:33.008792   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.008813   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.008823   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.008831   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.013277   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:33.509021   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.509043   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.509053   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.509059   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.512568   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.009494   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.009514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.009522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.009525   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.012872   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.013477   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:34.508671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.508698   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.508711   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.508717   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.511657   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.009518   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.009538   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.009546   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.009549   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.012353   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.509512   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.509539   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.509551   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.509564   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.513144   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.009477   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.009496   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.009503   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.009508   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.012424   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:36.509250   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.509279   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.509290   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.509295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.512794   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.513405   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:37.008636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.008657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.008668   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.008676   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.011455   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:37.509093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.509123   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.509127   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.512558   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.009185   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.009214   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.009222   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.009226   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.012314   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.508924   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.508943   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.508951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.508955   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.511947   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.008656   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.008679   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.008691   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.008698   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.011261   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.011779   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:39.509251   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.509272   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.509279   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.509283   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.512371   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:40.009266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.009299   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.013354   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:40.509289   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.509307   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.509315   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.509320   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.512591   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:41.009123   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.009146   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.009163   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.014310   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:41.014943   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:41.509077   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.509126   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.509134   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.512425   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.008587   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.008609   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.008621   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.008627   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.012270   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.509586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.509607   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.509615   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.509621   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.512638   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:43.009220   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.009238   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.009248   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.009256   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.012180   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.508622   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.508646   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.508656   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.508660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.511470   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.512019   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:44.009130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.009150   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.009161   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.012525   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:44.509423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.509446   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.509457   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.509462   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.513302   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.009198   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.009218   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.009225   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.009230   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.012566   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.508621   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.508641   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.508649   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.508652   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.511562   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:45.512081   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:46.008747   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.008770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.008778   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.008782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.011847   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:46.509246   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.509269   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.509277   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.509281   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.512939   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:47.008680   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.008703   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.008713   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.008719   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.030138   32020 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1028 17:27:47.508630   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.508650   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.508657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.508663   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.514479   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:47.515054   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:48.008911   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.008931   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.008940   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.008944   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.012001   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:48.509098   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.509121   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.509132   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.509138   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.512351   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.008615   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.008635   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.008643   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.008647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.011780   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.508700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.508723   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.508731   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.508735   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.511993   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.008627   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.008648   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.008657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.008660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.012285   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.012911   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:50.509280   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.509301   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.509309   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.509321   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.512855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.009269   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.009303   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.012097   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.509273   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.509293   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.509304   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.509309   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.512305   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.513072   32020 node_ready.go:49] node "ha-381619-m03" has status "Ready":"True"
	I1028 17:27:51.513099   32020 node_ready.go:38] duration metric: took 19.504662706s for node "ha-381619-m03" to be "Ready" ...
	I1028 17:27:51.513110   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:51.513182   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:51.513193   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.513203   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.513209   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.518727   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:51.525983   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.526072   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:27:51.526088   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.526100   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.526111   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.531963   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:51.532739   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.532753   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.532761   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.532764   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.535083   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.535631   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.535649   32020 pod_ready.go:82] duration metric: took 9.646144ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535657   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:27:51.535707   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.535714   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.535721   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.538224   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.538964   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.538979   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.538986   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.538990   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.541964   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.542349   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.542364   32020 pod_ready.go:82] duration metric: took 6.701109ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542375   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542424   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:27:51.542434   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.542441   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.542447   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.544839   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.545361   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.545376   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.545385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.545392   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.547384   32020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 17:27:51.547876   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.547890   32020 pod_ready.go:82] duration metric: took 5.50604ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547898   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:27:51.547944   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.547951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.547954   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.549977   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.550423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:51.550435   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.550442   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.550445   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.552459   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.553082   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.553099   32020 pod_ready.go:82] duration metric: took 5.194272ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.553110   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.709397   32020 request.go:632] Waited for 156.217787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709446   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709451   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.709458   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.709461   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.712548   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.909629   32020 request.go:632] Waited for 196.367534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909689   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.909700   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.909708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.918132   32020 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 17:27:51.918809   32020 pod_ready.go:93] pod "etcd-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.918828   32020 pod_ready.go:82] duration metric: took 365.711465ms for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.918850   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.109303   32020 request.go:632] Waited for 190.370368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109365   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109373   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.109383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.109388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.112392   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.309408   32020 request.go:632] Waited for 196.27481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309460   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309464   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.309471   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.309475   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.312195   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.312752   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.312777   32020 pod_ready.go:82] duration metric: took 393.917667ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.312791   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.509760   32020 request.go:632] Waited for 196.900981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509849   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509861   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.509872   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.509878   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.513709   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.709720   32020 request.go:632] Waited for 195.19818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709771   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709777   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.709784   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.709789   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.712910   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.713496   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.713513   32020 pod_ready.go:82] duration metric: took 400.71419ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.713525   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.910080   32020 request.go:632] Waited for 196.490754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910131   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910138   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.910148   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.910155   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.913570   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.109611   32020 request.go:632] Waited for 195.067242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109675   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109680   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.109688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.109692   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.112419   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:53.113243   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.113258   32020 pod_ready.go:82] duration metric: took 399.726328ms for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.113269   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.309322   32020 request.go:632] Waited for 195.985489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309373   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309378   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.309385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.309389   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.312514   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.509641   32020 request.go:632] Waited for 196.355986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509756   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.509788   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.509809   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.513067   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.513631   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.513648   32020 pod_ready.go:82] duration metric: took 400.372385ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.513660   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.709756   32020 request.go:632] Waited for 196.030975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709821   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709829   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.709838   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.709847   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.713250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.910289   32020 request.go:632] Waited for 196.241506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910347   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910352   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.910360   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.910365   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.913501   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.914111   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.914128   32020 pod_ready.go:82] duration metric: took 400.460847ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.914138   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.110262   32020 request.go:632] Waited for 196.057341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110321   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110328   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.110338   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.110344   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.113686   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.309625   32020 request.go:632] Waited for 195.198525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309704   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.309715   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.309724   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.312970   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.313530   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.313550   32020 pod_ready.go:82] duration metric: took 399.405564ms for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.313561   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.509582   32020 request.go:632] Waited for 195.958227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509651   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.509664   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.509669   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.513356   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.709469   32020 request.go:632] Waited for 195.28008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709541   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709547   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.709555   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.709562   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.712778   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.713684   32020 pod_ready.go:93] pod "kube-proxy-2z74r" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.713706   32020 pod_ready.go:82] duration metric: took 400.138051ms for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.713722   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.909768   32020 request.go:632] Waited for 195.979649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909859   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909871   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.909882   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.909893   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.912982   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.110064   32020 request.go:632] Waited for 196.359608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110135   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.110142   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.110148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.113297   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.113778   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.113796   32020 pod_ready.go:82] duration metric: took 400.063804ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.113805   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.309960   32020 request.go:632] Waited for 196.087241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310011   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310017   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.310027   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.310040   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.313630   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.509848   32020 request.go:632] Waited for 195.356609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509902   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509907   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.509917   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.509922   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.513283   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.513872   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.513891   32020 pod_ready.go:82] duration metric: took 400.079859ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.513903   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.709489   32020 request.go:632] Waited for 195.521691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709543   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709558   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.709582   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.709589   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.713346   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.910316   32020 request.go:632] Waited for 196.337736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910371   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910375   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.910383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.910388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.913484   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.914099   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.914115   32020 pod_ready.go:82] duration metric: took 400.201992ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.914124   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.110258   32020 request.go:632] Waited for 196.039546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110326   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110331   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.110337   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.110342   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.113332   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:56.310263   32020 request.go:632] Waited for 196.319737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310334   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310355   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.310370   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.310379   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.313786   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.314505   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.314532   32020 pod_ready.go:82] duration metric: took 400.399291ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.314546   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.510327   32020 request.go:632] Waited for 195.699418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510378   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510383   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.510390   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.510394   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.513464   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.709328   32020 request.go:632] Waited for 195.274185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709385   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709391   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.709398   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.709403   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.712740   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.713420   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.713436   32020 pod_ready.go:82] duration metric: took 398.882403ms for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.713446   32020 pod_ready.go:39] duration metric: took 5.200325366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:56.713469   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:27:56.713519   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:27:56.729002   32020 api_server.go:72] duration metric: took 25.075050157s to wait for apiserver process to appear ...
	I1028 17:27:56.729025   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:27:56.729051   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:27:56.734141   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 17:27:56.734212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:27:56.734223   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.734234   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.734242   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.735154   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:27:56.735212   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:27:56.735228   32020 api_server.go:131] duration metric: took 6.196303ms to wait for apiserver health ...
	I1028 17:27:56.735237   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:27:56.909657   32020 request.go:632] Waited for 174.332812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909707   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909712   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.909720   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.909725   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.915545   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:56.922175   32020 system_pods.go:59] 24 kube-system pods found
	I1028 17:27:56.922215   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:56.922225   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:56.922230   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:56.922235   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:56.922240   32020 system_pods.go:61] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:56.922248   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:56.922253   32020 system_pods.go:61] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:56.922259   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:56.922267   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:56.922273   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:56.922281   32020 system_pods.go:61] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:56.922288   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:56.922294   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:56.922302   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:56.922308   32020 system_pods.go:61] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:56.922317   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:56.922327   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:56.922335   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:56.922341   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:56.922348   32020 system_pods.go:61] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:56.922352   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:56.922355   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:56.922361   32020 system_pods.go:61] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:56.922364   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:56.922369   32020 system_pods.go:74] duration metric: took 187.124012ms to wait for pod list to return data ...
	I1028 17:27:56.922378   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:27:57.109949   32020 request.go:632] Waited for 187.506133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110004   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110012   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.110022   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.110033   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.113502   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:57.113628   32020 default_sa.go:45] found service account: "default"
	I1028 17:27:57.113645   32020 default_sa.go:55] duration metric: took 191.260682ms for default service account to be created ...
	I1028 17:27:57.113656   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:27:57.309925   32020 request.go:632] Waited for 196.205305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310024   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310036   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.310047   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.310053   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.315888   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:57.322856   32020 system_pods.go:86] 24 kube-system pods found
	I1028 17:27:57.322880   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:57.322886   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:57.322890   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:57.322893   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:57.322897   32020 system_pods.go:89] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:57.322900   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:57.322904   32020 system_pods.go:89] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:57.322907   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:57.322918   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:57.322927   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:57.322932   32020 system_pods.go:89] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:57.322940   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:57.322946   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:57.322951   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:57.322958   32020 system_pods.go:89] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:57.322966   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:57.322971   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:57.322978   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:57.322986   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:57.322991   32020 system_pods.go:89] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:57.322999   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:57.323006   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:57.323011   32020 system_pods.go:89] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:57.323018   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:57.323027   32020 system_pods.go:126] duration metric: took 209.364489ms to wait for k8s-apps to be running ...
	I1028 17:27:57.323045   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:27:57.323123   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:57.338248   32020 system_svc.go:56] duration metric: took 15.198158ms WaitForService to wait for kubelet
	I1028 17:27:57.338268   32020 kubeadm.go:582] duration metric: took 25.684324158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:27:57.338294   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:27:57.509596   32020 request.go:632] Waited for 171.215252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509662   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509677   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.509688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.509699   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.514522   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:57.515701   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515733   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515769   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515779   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515785   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515800   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515810   32020 node_conditions.go:105] duration metric: took 177.507704ms to run NodePressure ...
	I1028 17:27:57.515829   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:27:57.515863   32020 start.go:255] writing updated cluster config ...
	I1028 17:27:57.516171   32020 ssh_runner.go:195] Run: rm -f paused
	I1028 17:27:57.567306   32020 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:27:57.569290   32020 out.go:177] * Done! kubectl is now configured to use "ha-381619" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.468558195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704468534967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16070404-4790-4113-8a0d-e60149c89f4a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.469605209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5ee598c-a76e-451c-abfc-0f382dcd644d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.469683611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5ee598c-a76e-451c-abfc-0f382dcd644d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.469984323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5ee598c-a76e-451c-abfc-0f382dcd644d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.505705709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61a767f4-71af-4ea3-97fa-255aa7c1b795 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.505792009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61a767f4-71af-4ea3-97fa-255aa7c1b795 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.507027395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ae18e57-aa9d-4ae1-9151-79de003c6bcd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.507421860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704507401029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ae18e57-aa9d-4ae1-9151-79de003c6bcd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.508036134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6deb494f-3f11-4b87-b978-038b629f1b6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.508111112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6deb494f-3f11-4b87-b978-038b629f1b6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.508314045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6deb494f-3f11-4b87-b978-038b629f1b6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.550417829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab038a7e-5205-437c-ad81-49de482fa861 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.550520048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab038a7e-5205-437c-ad81-49de482fa861 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.551659083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=504267dc-84f2-4e0c-b195-8961391268ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.552319450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704552294734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=504267dc-84f2-4e0c-b195-8961391268ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.553047794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf4eb988-2f8e-46b0-974f-a079a66799bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.553225143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf4eb988-2f8e-46b0-974f-a079a66799bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.553586627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf4eb988-2f8e-46b0-974f-a079a66799bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.606322225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b1450a5-aa58-46f6-b876-2cd6fef40ca6 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.606416434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b1450a5-aa58-46f6-b876-2cd6fef40ca6 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.607500748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=843d8736-4db0-4a97-88af-d98f28c9795c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.609563956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704609537948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=843d8736-4db0-4a97-88af-d98f28c9795c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.612399554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d25dd2ea-16fc-4b20-9b56-22c794ae7908 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.612565907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d25dd2ea-16fc-4b20-9b56-22c794ae7908 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:44 ha-381619 crio[660]: time="2024-10-28 17:31:44.613126555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d25dd2ea-16fc-4b20-9b56-22c794ae7908 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb3c00b93a7e6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   32dd7ef5c8db8       coredns-7c65d6cfc9-mtmvl
	439a12fd4f2e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   a8d9ef07a9de9       coredns-7c65d6cfc9-6lp7c
	32b25385ac6d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    6 minutes ago       Running             storage-provisioner       0                   cdf8a7008daaa       storage-provisioner
	02eaa5b848022       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                    6 minutes ago       Running             kindnet-cni               0                   ec93f4cb498de       kindnet-vj9vj
	4c2af4b0e8f70       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                    6 minutes ago       Running             kube-proxy                0                   31e8db8e13561       kube-proxy-mqdtj
	8820dc5a1a258       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215   6 minutes ago       Running             kube-vip                  0                   0440b64671662       kube-vip-ha-381619
	a2a4ad9e37b9c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                    6 minutes ago       Running             kube-apiserver            0                   8535275eaad56       kube-apiserver-ha-381619
	c4311ab52a438       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                    6 minutes ago       Running             kube-controller-manager   0                   75b5ea16f2e6b       kube-controller-manager-ha-381619
	5d299a6ffacac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                    6 minutes ago       Running             etcd                      0                   2d476f176dee3       etcd-ha-381619
	8f6c077dbde89       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                    6 minutes ago       Running             kube-scheduler            0                   2c5f11da0112e       kube-scheduler-ha-381619
	
	
	==> coredns [439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f] <==
	[INFO] 10.244.2.2:53226 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001368106s
	[INFO] 10.244.2.2:36312 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118066s
	[INFO] 10.244.1.2:38518 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000323292s
	[INFO] 10.244.1.2:47890 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000118239s
	[INFO] 10.244.1.2:45070 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000130482s
	[INFO] 10.244.1.2:39687 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001925125s
	[INFO] 10.244.2.3:53812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151587s
	[INFO] 10.244.2.3:54592 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180193s
	[INFO] 10.244.2.3:46470 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138925s
	[INFO] 10.244.2.2:48981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776352s
	[INFO] 10.244.2.2:35249 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131241s
	[INFO] 10.244.2.2:53917 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177037s
	[INFO] 10.244.2.2:34049 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001120542s
	[INFO] 10.244.1.2:35278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111663s
	[INFO] 10.244.1.2:37962 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106563s
	[INFO] 10.244.1.2:40545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246646s
	[INFO] 10.244.1.2:40814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215904s
	[INFO] 10.244.2.3:49806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000229773s
	[INFO] 10.244.2.2:44763 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117588s
	[INFO] 10.244.2.3:48756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125652s
	[INFO] 10.244.2.3:41328 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177165s
	[INFO] 10.244.2.3:35650 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137462s
	[INFO] 10.244.2.2:60478 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163829s
	[INFO] 10.244.2.2:51252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106643s
	[INFO] 10.244.1.2:56942 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137828s
	
	
	==> coredns [fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30] <==
	[INFO] 10.244.2.3:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131477s
	[INFO] 10.244.2.2:46692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196624s
	[INFO] 10.244.2.2:38402 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226272s
	[INFO] 10.244.2.2:34845 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153045s
	[INFO] 10.244.2.2:49870 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121016s
	[INFO] 10.244.1.2:51535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001893779s
	[INFO] 10.244.1.2:36412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109955s
	[INFO] 10.244.1.2:53434 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000734s
	[INFO] 10.244.1.2:38007 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101464s
	[INFO] 10.244.2.3:39546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.2.3:49299 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158392s
	[INFO] 10.244.2.3:42607 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102312s
	[INFO] 10.244.2.2:36855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150344s
	[INFO] 10.244.2.2:46374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00016867s
	[INFO] 10.244.2.2:37275 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112218s
	[INFO] 10.244.1.2:41523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017259s
	[INFO] 10.244.1.2:43696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000347465s
	[INFO] 10.244.1.2:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161099s
	[INFO] 10.244.1.2:59192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118287s
	[INFO] 10.244.2.3:42470 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243243s
	[INFO] 10.244.2.2:35932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020307s
	[INFO] 10.244.2.2:39597 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184178s
	[INFO] 10.244.1.2:43973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139891s
	[INFO] 10.244.1.2:41644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171411s
	[INFO] 10.244.1.2:47984 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086921s
	
	
	==> describe nodes <==
	Name:               ha-381619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:25:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-381619
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ff487634ba146ebb8929cc99763c422
	  System UUID:                1ff48763-4ba1-46eb-b892-9cc99763c422
	  Boot ID:                    ce5a7712-d088-475f-80ec-c8b7dee605bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6lp7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 coredns-7c65d6cfc9-mtmvl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 etcd-ha-381619                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-vj9vj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-381619             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-381619    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-mqdtj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-381619             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-381619                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m26s                  kube-proxy       
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m37s)  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m37s)  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m37s)  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s                  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s                  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s                  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  NodeReady                6m14s                  kubelet          Node ha-381619 status is now: NodeReady
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	
	
	Name:               ha-381619-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:26:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:29:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-381619-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe038bc140e34a24bfa4fe915bd6a83f
	  System UUID:                fe038bc1-40e3-4a24-bfa4-fe915bd6a83f
	  Boot ID:                    2395418c-cd94-4285-8c38-7cd31a1df92a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dxwnw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-381619-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m34s
	  kube-system                 kindnet-2ggdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-381619-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-controller-manager-ha-381619-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-nrfgq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-381619-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-vip-ha-381619-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m34s (x2 over 5m35s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x2 over 5m35s)  kubelet          Node ha-381619-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x2 over 5m35s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeReady                5m12s                  kubelet          Node ha-381619-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeNotReady             98s                    node-controller  Node ha-381619-m02 status is now: NodeNotReady
	
	
	Name:               ha-381619-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:27:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-381619-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f056208103704b70bfb827d2e01fcbd6
	  System UUID:                f0562081-0370-4b70-bfb8-27d2e01fcbd6
	  Boot ID:                    3c41c87b-23bb-455f-8665-1ca87b736f8b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-26cg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  default                     busybox-7dff88458-9n6bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-381619-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m14s
	  kube-system                 kindnet-82dqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m16s
	  kube-system                 kube-apiserver-ha-381619-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-ha-381619-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-2z74r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-ha-381619-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-vip-ha-381619-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node ha-381619-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s (x7 over 4m16s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	
	
	Name:               ha-381619-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_28_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:28:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-381619-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c794eda5b61f4b51846d119496d6611f
	  System UUID:                c794eda5-b61f-4b51-846d-119496d6611f
	  Boot ID:                    d054e196-c392-4e7e-a1b3-e459ee7974d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzqx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-7dwhb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node ha-381619-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-381619-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 17:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050172] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854623] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.491096] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.570925] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.341236] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.065909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059908] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.181734] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.112783] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.252616] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct28 17:25] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.759910] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.058388] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.418126] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.806365] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +4.131777] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.537990] kauditd_printk_skb: 41 callbacks suppressed
	[  +9.942403] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9] <==
	{"level":"warn","ts":"2024-10-28T17:31:44.876202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.888258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.895416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.899361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.907517Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.917686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.935010Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.939453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.942211Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.947682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.956141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.963552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.967355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.970794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.977017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.977947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.983226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.989386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.992632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.995695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:44.998936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:45.004107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:45.009401Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:45.054370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:45.056502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:31:45 up 7 min,  0 users,  load average: 0.08, 0.21, 0.12
	Linux ha-381619 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3] <==
	I1028 17:31:10.294565       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:20.291776       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:20.291826       1 main.go:300] handling current node
	I1028 17:31:20.291854       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:20.291923       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:20.292122       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:20.292149       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:20.292226       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:20.292249       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:30.295378       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:30.295542       1 main.go:300] handling current node
	I1028 17:31:30.295590       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:30.295611       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:30.296072       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:30.296113       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:30.296285       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:30.296308       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:40.295696       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:40.295776       1 main.go:300] handling current node
	I1028 17:31:40.295795       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:40.295804       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:40.296160       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:40.296192       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:40.296331       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:40.296358       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37] <==
	W1028 17:25:12.245785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I1028 17:25:12.247133       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 17:25:12.256065       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 17:25:12.326331       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 17:25:13.936309       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 17:25:13.952773       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 17:25:13.968009       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 17:25:17.830466       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1028 17:25:18.077531       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1028 17:28:07.019815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41404: use of closed network connection
	E1028 17:28:07.205390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41420: use of closed network connection
	E1028 17:28:07.386536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41448: use of closed network connection
	E1028 17:28:07.599536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41470: use of closed network connection
	E1028 17:28:07.775264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41490: use of closed network connection
	E1028 17:28:07.949242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41512: use of closed network connection
	E1028 17:28:08.118133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41522: use of closed network connection
	E1028 17:28:08.303400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41550: use of closed network connection
	E1028 17:28:08.475723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41556: use of closed network connection
	E1028 17:28:08.762057       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47594: use of closed network connection
	E1028 17:28:08.944378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47612: use of closed network connection
	E1028 17:28:09.126803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47636: use of closed network connection
	E1028 17:28:09.297149       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47658: use of closed network connection
	E1028 17:28:09.471140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47674: use of closed network connection
	E1028 17:28:09.647026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47704: use of closed network connection
	W1028 17:29:32.257515       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.230]
	
	
	==> kube-controller-manager [c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8] <==
	I1028 17:28:42.026011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.036622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.060198       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.297173       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-381619-m04"
	I1028 17:28:42.386481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.396569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.781672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.951532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.966339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:46.926084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:47.034432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:52.333791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:29:04.463505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:06.946376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:12.658007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:30:06.972035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:06.972340       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:30:06.993167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:07.005350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.940759ms"
	I1028 17:30:07.006727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.8µs"
	I1028 17:30:07.346197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:12.214622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:31.329575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619"
	
	
	==> kube-proxy [4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 17:25:18.698349       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 17:25:18.711046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E1028 17:25:18.711157       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:25:18.745433       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 17:25:18.745462       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 17:25:18.745490       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:25:18.747834       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:25:18.748160       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:25:18.748312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:25:18.749989       1 config.go:199] "Starting service config controller"
	I1028 17:25:18.750071       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:25:18.750117       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:25:18.750134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:25:18.750598       1 config.go:328] "Starting node config controller"
	I1028 17:25:18.751738       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:25:18.851210       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:25:18.851309       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:25:18.852898       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b] <==
	E1028 17:25:11.721217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.842707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:25:11.842776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.845287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:25:11.848083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.886433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:25:11.886602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1028 17:25:14.002937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:27:58.460072       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="568dfe45-5437-4cfd-8d20-2fa1e33d8999" pod="default/busybox-7dff88458-9n6bb" assumedNode="ha-381619-m03" currentNode="ha-381619-m02"
	E1028 17:27:58.471238       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m02"
	E1028 17:27:58.471407       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 568dfe45-5437-4cfd-8d20-2fa1e33d8999(default/busybox-7dff88458-9n6bb) was assumed on ha-381619-m02 but assigned to ha-381619-m03" pod="default/busybox-7dff88458-9n6bb"
	E1028 17:27:58.471445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" pod="default/busybox-7dff88458-9n6bb"
	I1028 17:27:58.471522       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m03"
	E1028 17:28:42.093317       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.093832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9291bc3b-2fa3-4a6c-99d3-7bb2a5721b25(kube-system/kindnet-fzqx2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fzqx2"
	E1028 17:28:42.094010       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-fzqx2"
	I1028 17:28:42.094225       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.149948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.154547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15a36ca9-85be-4b6a-8e4a-31495d13a0c1(kube-system/kube-proxy-7dwhb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7dwhb"
	E1028 17:28:42.156945       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" pod="kube-system/kube-proxy-7dwhb"
	I1028 17:28:42.157115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.164640       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	E1028 17:28:42.164715       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 61afb85d-818e-40a2-ad14-87c5f4541d0e(kube-system/kindnet-p6x26) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p6x26"
	E1028 17:28:42.164729       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-p6x26"
	I1028 17:28:42.164745       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	
	
	==> kubelet <==
	Oct 28 17:30:13 ha-381619 kubelet[1301]: E1028 17:30:13.976259    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136613975105937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:13 ha-381619 kubelet[1301]: E1028 17:30:13.976959    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136613975105937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979164    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979443    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.980958    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.982957    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988254    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988294    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989574    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989617    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996610    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996710    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.872137    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997852    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997963    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:23.999904    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:24.000328    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001784    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001829    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003002    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003044    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-381619 -n ha-381619
helpers_test.go:261: (dbg) Run:  kubectl --context ha-381619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.35s)
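As an aside, the post-mortem above is driven by the two commands logged at helpers_test.go:254 and helpers_test.go:261: querying the API-server state via "minikube status" and listing any pods whose phase is not Running. Below is a minimal, hypothetical Go sketch of that check, offered only to illustrate what those two log lines do; the binary path, profile name and flags are copied from the log, and this is not the actual helpers_test code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and returns its combined stdout/stderr.
    func run(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Binary path, profile name and flags are copied from the log above;
        // adjust for your own environment.
        apiServer, err := run("out/minikube-linux-amd64",
            "status", "--format={{.APIServer}}", "-p", "ha-381619", "-n", "ha-381619")
        fmt.Printf("APIServer state: %q (err: %v)\n", apiServer, err)

        notRunning, err := run("kubectl", "--context", "ha-381619", "get", "po", "-A",
            "-o=jsonpath={.items[*].metadata.name}", "--field-selector=status.phase!=Running")
        fmt.Printf("pods not Running: %q (err: %v)\n", notRunning, err)
    }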

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.362373587s)
ha_test.go:415: expected profile "ha-381619" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-381619\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-381619\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-381619\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.230\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.171\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.17\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.224\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevi
rt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\
",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-381619 -n ha-381619
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 logs -n 25: (1.320581046s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m03_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m04 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp testdata/cp-test.txt                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m04_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03:/home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m03 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-381619 node stop m02 -v=7                                                    | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:24:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:24:32.704402   32020 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:24:32.704551   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704563   32020 out.go:358] Setting ErrFile to fd 2...
	I1028 17:24:32.704569   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704718   32020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:24:32.705246   32020 out.go:352] Setting JSON to false
	I1028 17:24:32.706049   32020 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4016,"bootTime":1730132257,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:24:32.706140   32020 start.go:139] virtualization: kvm guest
	I1028 17:24:32.708076   32020 out.go:177] * [ha-381619] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:24:32.709709   32020 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:24:32.709708   32020 notify.go:220] Checking for updates...
	I1028 17:24:32.711979   32020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:24:32.713179   32020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:24:32.714308   32020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.715427   32020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:24:32.716562   32020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:24:32.717898   32020 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:24:32.750233   32020 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 17:24:32.751376   32020 start.go:297] selected driver: kvm2
	I1028 17:24:32.751386   32020 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:24:32.751396   32020 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:24:32.752108   32020 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.752174   32020 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:24:32.765779   32020 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:24:32.765818   32020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:24:32.766066   32020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:24:32.766095   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:24:32.766149   32020 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 17:24:32.766159   32020 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 17:24:32.766215   32020 start.go:340] cluster config:
	{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1028 17:24:32.766343   32020 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.768753   32020 out.go:177] * Starting "ha-381619" primary control-plane node in "ha-381619" cluster
	I1028 17:24:32.769947   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:32.769974   32020 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:24:32.769982   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:24:32.770049   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:24:32.770062   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:24:32.770342   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:32.770362   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json: {Name:mkd5c3a5f97562236390379745e09449a8badb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:24:32.770497   32020 start.go:360] acquireMachinesLock for ha-381619: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:24:32.770539   32020 start.go:364] duration metric: took 26.277µs to acquireMachinesLock for "ha-381619"
	I1028 17:24:32.770561   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:24:32.770606   32020 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 17:24:32.772872   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:24:32.772986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:32.773028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:32.786246   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I1028 17:24:32.786651   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:32.787204   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:24:32.787223   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:32.787585   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:32.787761   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:32.787890   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:32.788041   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:24:32.788072   32020 client.go:168] LocalClient.Create starting
	I1028 17:24:32.788105   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:24:32.788134   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788152   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788202   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:24:32.788220   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788232   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788246   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:24:32.788258   32020 main.go:141] libmachine: (ha-381619) Calling .PreCreateCheck
	I1028 17:24:32.788587   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:32.789017   32020 main.go:141] libmachine: Creating machine...
	I1028 17:24:32.789034   32020 main.go:141] libmachine: (ha-381619) Calling .Create
	I1028 17:24:32.789161   32020 main.go:141] libmachine: (ha-381619) Creating KVM machine...
	I1028 17:24:32.790254   32020 main.go:141] libmachine: (ha-381619) DBG | found existing default KVM network
	I1028 17:24:32.790889   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.790760   32043 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1028 17:24:32.790924   32020 main.go:141] libmachine: (ha-381619) DBG | created network xml: 
	I1028 17:24:32.790942   32020 main.go:141] libmachine: (ha-381619) DBG | <network>
	I1028 17:24:32.790953   32020 main.go:141] libmachine: (ha-381619) DBG |   <name>mk-ha-381619</name>
	I1028 17:24:32.790960   32020 main.go:141] libmachine: (ha-381619) DBG |   <dns enable='no'/>
	I1028 17:24:32.790971   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.790981   32020 main.go:141] libmachine: (ha-381619) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 17:24:32.791022   32020 main.go:141] libmachine: (ha-381619) DBG |     <dhcp>
	I1028 17:24:32.791042   32020 main.go:141] libmachine: (ha-381619) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 17:24:32.791053   32020 main.go:141] libmachine: (ha-381619) DBG |     </dhcp>
	I1028 17:24:32.791062   32020 main.go:141] libmachine: (ha-381619) DBG |   </ip>
	I1028 17:24:32.791070   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.791079   32020 main.go:141] libmachine: (ha-381619) DBG | </network>
	I1028 17:24:32.791092   32020 main.go:141] libmachine: (ha-381619) DBG | 
	I1028 17:24:32.795776   32020 main.go:141] libmachine: (ha-381619) DBG | trying to create private KVM network mk-ha-381619 192.168.39.0/24...
	I1028 17:24:32.856590   32020 main.go:141] libmachine: (ha-381619) DBG | private KVM network mk-ha-381619 192.168.39.0/24 created
	I1028 17:24:32.856623   32020 main.go:141] libmachine: (ha-381619) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:32.856641   32020 main.go:141] libmachine: (ha-381619) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:24:32.856686   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.856608   32043 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.856733   32020 main.go:141] libmachine: (ha-381619) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:24:33.109141   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.109021   32043 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa...
	I1028 17:24:33.382423   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382288   32043 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk...
	I1028 17:24:33.382457   32020 main.go:141] libmachine: (ha-381619) DBG | Writing magic tar header
	I1028 17:24:33.382473   32020 main.go:141] libmachine: (ha-381619) DBG | Writing SSH key tar header
	I1028 17:24:33.382487   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382434   32043 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:33.382577   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 (perms=drwx------)
	I1028 17:24:33.382600   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:24:33.382611   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619
	I1028 17:24:33.382624   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:24:33.382636   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:33.382651   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:24:33.382662   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:24:33.382673   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:24:33.382683   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:24:33.382696   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:24:33.382710   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:24:33.382720   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:24:33.382733   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:33.382743   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home
	I1028 17:24:33.382755   32020 main.go:141] libmachine: (ha-381619) DBG | Skipping /home - not owner
	I1028 17:24:33.383729   32020 main.go:141] libmachine: (ha-381619) define libvirt domain using xml: 
	I1028 17:24:33.383753   32020 main.go:141] libmachine: (ha-381619) <domain type='kvm'>
	I1028 17:24:33.383763   32020 main.go:141] libmachine: (ha-381619)   <name>ha-381619</name>
	I1028 17:24:33.383771   32020 main.go:141] libmachine: (ha-381619)   <memory unit='MiB'>2200</memory>
	I1028 17:24:33.383782   32020 main.go:141] libmachine: (ha-381619)   <vcpu>2</vcpu>
	I1028 17:24:33.383791   32020 main.go:141] libmachine: (ha-381619)   <features>
	I1028 17:24:33.383800   32020 main.go:141] libmachine: (ha-381619)     <acpi/>
	I1028 17:24:33.383823   32020 main.go:141] libmachine: (ha-381619)     <apic/>
	I1028 17:24:33.383834   32020 main.go:141] libmachine: (ha-381619)     <pae/>
	I1028 17:24:33.383847   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.383857   32020 main.go:141] libmachine: (ha-381619)   </features>
	I1028 17:24:33.383868   32020 main.go:141] libmachine: (ha-381619)   <cpu mode='host-passthrough'>
	I1028 17:24:33.383876   32020 main.go:141] libmachine: (ha-381619)   
	I1028 17:24:33.383886   32020 main.go:141] libmachine: (ha-381619)   </cpu>
	I1028 17:24:33.383894   32020 main.go:141] libmachine: (ha-381619)   <os>
	I1028 17:24:33.383901   32020 main.go:141] libmachine: (ha-381619)     <type>hvm</type>
	I1028 17:24:33.383912   32020 main.go:141] libmachine: (ha-381619)     <boot dev='cdrom'/>
	I1028 17:24:33.383921   32020 main.go:141] libmachine: (ha-381619)     <boot dev='hd'/>
	I1028 17:24:33.383934   32020 main.go:141] libmachine: (ha-381619)     <bootmenu enable='no'/>
	I1028 17:24:33.383944   32020 main.go:141] libmachine: (ha-381619)   </os>
	I1028 17:24:33.383952   32020 main.go:141] libmachine: (ha-381619)   <devices>
	I1028 17:24:33.383961   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='cdrom'>
	I1028 17:24:33.383974   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/boot2docker.iso'/>
	I1028 17:24:33.383984   32020 main.go:141] libmachine: (ha-381619)       <target dev='hdc' bus='scsi'/>
	I1028 17:24:33.383994   32020 main.go:141] libmachine: (ha-381619)       <readonly/>
	I1028 17:24:33.384049   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384071   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='disk'>
	I1028 17:24:33.384079   32020 main.go:141] libmachine: (ha-381619)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:24:33.384087   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk'/>
	I1028 17:24:33.384092   32020 main.go:141] libmachine: (ha-381619)       <target dev='hda' bus='virtio'/>
	I1028 17:24:33.384099   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384104   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384111   32020 main.go:141] libmachine: (ha-381619)       <source network='mk-ha-381619'/>
	I1028 17:24:33.384116   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384122   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384127   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384134   32020 main.go:141] libmachine: (ha-381619)       <source network='default'/>
	I1028 17:24:33.384140   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384146   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384151   32020 main.go:141] libmachine: (ha-381619)     <serial type='pty'>
	I1028 17:24:33.384157   32020 main.go:141] libmachine: (ha-381619)       <target port='0'/>
	I1028 17:24:33.384180   32020 main.go:141] libmachine: (ha-381619)     </serial>
	I1028 17:24:33.384203   32020 main.go:141] libmachine: (ha-381619)     <console type='pty'>
	I1028 17:24:33.384217   32020 main.go:141] libmachine: (ha-381619)       <target type='serial' port='0'/>
	I1028 17:24:33.384235   32020 main.go:141] libmachine: (ha-381619)     </console>
	I1028 17:24:33.384247   32020 main.go:141] libmachine: (ha-381619)     <rng model='virtio'>
	I1028 17:24:33.384258   32020 main.go:141] libmachine: (ha-381619)       <backend model='random'>/dev/random</backend>
	I1028 17:24:33.384267   32020 main.go:141] libmachine: (ha-381619)     </rng>
	I1028 17:24:33.384291   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384303   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384320   32020 main.go:141] libmachine: (ha-381619)   </devices>
	I1028 17:24:33.384331   32020 main.go:141] libmachine: (ha-381619) </domain>
	I1028 17:24:33.384339   32020 main.go:141] libmachine: (ha-381619) 
	I1028 17:24:33.388368   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:d7:31:89 in network default
	I1028 17:24:33.388983   32020 main.go:141] libmachine: (ha-381619) Ensuring networks are active...
	I1028 17:24:33.389001   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:33.389577   32020 main.go:141] libmachine: (ha-381619) Ensuring network default is active
	I1028 17:24:33.389893   32020 main.go:141] libmachine: (ha-381619) Ensuring network mk-ha-381619 is active
	I1028 17:24:33.390366   32020 main.go:141] libmachine: (ha-381619) Getting domain xml...
	I1028 17:24:33.390966   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:34.558865   32020 main.go:141] libmachine: (ha-381619) Waiting to get IP...
	I1028 17:24:34.559610   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.559962   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.559982   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.559945   32043 retry.go:31] will retry after 257.179075ms: waiting for machine to come up
	I1028 17:24:34.818320   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.818636   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.818664   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.818591   32043 retry.go:31] will retry after 336.999416ms: waiting for machine to come up
	I1028 17:24:35.156955   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.157385   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.157410   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.157352   32043 retry.go:31] will retry after 376.336351ms: waiting for machine to come up
	I1028 17:24:35.534739   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.535148   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.535176   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.535109   32043 retry.go:31] will retry after 414.103212ms: waiting for machine to come up
	I1028 17:24:35.950512   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.950871   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.950902   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.950833   32043 retry.go:31] will retry after 701.752446ms: waiting for machine to come up
	I1028 17:24:36.653573   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:36.653919   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:36.653945   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:36.653879   32043 retry.go:31] will retry after 793.432647ms: waiting for machine to come up
	I1028 17:24:37.448827   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:37.449212   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:37.449233   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:37.449175   32043 retry.go:31] will retry after 894.965011ms: waiting for machine to come up
	I1028 17:24:38.345655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:38.346083   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:38.346104   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:38.346040   32043 retry.go:31] will retry after 955.035568ms: waiting for machine to come up
	I1028 17:24:39.303112   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:39.303513   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:39.303566   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:39.303470   32043 retry.go:31] will retry after 1.649236041s: waiting for machine to come up
	I1028 17:24:40.955622   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:40.956156   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:40.956183   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:40.956118   32043 retry.go:31] will retry after 1.776451571s: waiting for machine to come up
	I1028 17:24:42.733883   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:42.734354   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:42.734378   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:42.734330   32043 retry.go:31] will retry after 2.290450392s: waiting for machine to come up
	I1028 17:24:45.027299   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:45.027697   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:45.027727   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:45.027647   32043 retry.go:31] will retry after 3.000171726s: waiting for machine to come up
	I1028 17:24:48.029293   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:48.029625   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:48.029642   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:48.029599   32043 retry.go:31] will retry after 3.464287385s: waiting for machine to come up
	I1028 17:24:51.498145   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:51.498494   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:51.498520   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:51.498450   32043 retry.go:31] will retry after 4.798676944s: waiting for machine to come up
	I1028 17:24:56.301062   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301461   32020 main.go:141] libmachine: (ha-381619) Found IP for machine: 192.168.39.230
	I1028 17:24:56.301476   32020 main.go:141] libmachine: (ha-381619) Reserving static IP address...
	I1028 17:24:56.301485   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has current primary IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301800   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find host DHCP lease matching {name: "ha-381619", mac: "52:54:00:bf:e3:f2", ip: "192.168.39.230"} in network mk-ha-381619
	I1028 17:24:56.367996   32020 main.go:141] libmachine: (ha-381619) Reserved static IP address: 192.168.39.230
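	[editor's note] The run of "will retry after …" lines above is the KVM driver polling libvirt's DHCP leases with a growing, jittered delay until the domain reports an IP. The sketch below shows that wait pattern in isolation, assuming a hypothetical lookupIP probe and illustrative timing constants; it is not minikube's actual retry.go API.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical probe; in the real driver this asks libvirt
	// for a DHCP lease matching the domain's MAC address.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupIP with a growing, jittered delay until it
	// succeeds or the overall deadline expires.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// jitter the delay so parallel machine creations do not poll in lockstep
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("machine did not obtain an IP within %v", timeout)
	}

	func main() {
		ip, err := waitForIP(3 * time.Second)
		fmt.Println(ip, err)
	}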
	I1028 17:24:56.368025   32020 main.go:141] libmachine: (ha-381619) Waiting for SSH to be available...
	I1028 17:24:56.368033   32020 main.go:141] libmachine: (ha-381619) DBG | Getting to WaitForSSH function...
	I1028 17:24:56.370488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.370848   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.370872   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.371022   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH client type: external
	I1028 17:24:56.371056   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa (-rw-------)
	I1028 17:24:56.371091   32020 main.go:141] libmachine: (ha-381619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:24:56.371104   32020 main.go:141] libmachine: (ha-381619) DBG | About to run SSH command:
	I1028 17:24:56.371114   32020 main.go:141] libmachine: (ha-381619) DBG | exit 0
	I1028 17:24:56.492195   32020 main.go:141] libmachine: (ha-381619) DBG | SSH cmd err, output: <nil>: 
	I1028 17:24:56.492449   32020 main.go:141] libmachine: (ha-381619) KVM machine creation complete!
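	[editor's note] The SSH availability check above simply runs "exit 0" through an external ssh client with host-key checking disabled. A minimal sketch of the same probe via os/exec follows; the flag list mirrors the options in the log, while the key path and retry count are placeholders.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReachable returns nil once "exit 0" succeeds over SSH, i.e. the
	// guest's sshd accepts connections with the provided private key.
	func sshReachable(addr, keyPath string) error {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		for attempt := 0; attempt < 30; attempt++ {
			if err := sshReachable("192.168.39.230", "/path/to/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}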
	I1028 17:24:56.492777   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:56.493326   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493514   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493649   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:24:56.493664   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:24:56.494850   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:24:56.494862   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:24:56.494867   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:24:56.494872   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.496787   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497152   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.497174   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497302   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.497464   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497595   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497725   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.497885   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.498064   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.498078   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:24:56.595488   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:24:56.595509   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:24:56.595519   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.597859   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598187   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.598209   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598403   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.598582   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598880   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.599036   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.599254   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.599265   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:24:56.696771   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:24:56.696858   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:24:56.696872   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:24:56.696881   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697109   32020 buildroot.go:166] provisioning hostname "ha-381619"
	I1028 17:24:56.697130   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697282   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.699770   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700115   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.700139   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700271   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.700441   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700571   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700701   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.700825   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.701013   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.701029   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619 && echo "ha-381619" | sudo tee /etc/hostname
	I1028 17:24:56.814628   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:24:56.814655   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.817104   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817470   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.817491   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817657   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.817827   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.817992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.818124   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.818278   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.818455   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.818475   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:24:56.926794   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:24:56.926821   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:24:56.926841   32020 buildroot.go:174] setting up certificates
	I1028 17:24:56.926853   32020 provision.go:84] configureAuth start
	I1028 17:24:56.926865   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.927086   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:56.929479   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929816   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.929835   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929984   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.931934   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932225   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.932249   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932384   32020 provision.go:143] copyHostCerts
	I1028 17:24:56.932411   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932452   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:24:56.932465   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932554   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:24:56.932658   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932682   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:24:56.932692   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932731   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:24:56.932840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932873   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:24:56.932883   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932921   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:24:56.933013   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619 san=[127.0.0.1 192.168.39.230 ha-381619 localhost minikube]
	I1028 17:24:57.000217   32020 provision.go:177] copyRemoteCerts
	I1028 17:24:57.000264   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:24:57.000288   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.002585   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.002859   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.002887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.003010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.003192   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.003327   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.003456   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.082327   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:24:57.082386   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:24:57.108992   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:24:57.109040   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 17:24:57.131168   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:24:57.131225   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:24:57.153241   32020 provision.go:87] duration metric: took 226.378501ms to configureAuth
	I1028 17:24:57.153264   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:24:57.153419   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:24:57.153491   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.155887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156229   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.156255   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156416   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.156589   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156751   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156909   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.157032   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.157170   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.157183   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:24:57.371091   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:24:57.371116   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:24:57.371138   32020 main.go:141] libmachine: (ha-381619) Calling .GetURL
	I1028 17:24:57.372265   32020 main.go:141] libmachine: (ha-381619) DBG | Using libvirt version 6000000
	I1028 17:24:57.374388   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374694   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.374715   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374887   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:24:57.374900   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:24:57.374907   32020 client.go:171] duration metric: took 24.586826396s to LocalClient.Create
	I1028 17:24:57.374929   32020 start.go:167] duration metric: took 24.586887382s to libmachine.API.Create "ha-381619"
	I1028 17:24:57.374942   32020 start.go:293] postStartSetup for "ha-381619" (driver="kvm2")
	I1028 17:24:57.374957   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:24:57.374978   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.375196   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:24:57.375226   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.377231   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377544   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.377561   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377690   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.377841   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.378010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.378127   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.458768   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:24:57.463205   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:24:57.463222   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:24:57.463283   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:24:57.463370   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:24:57.463382   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:24:57.463492   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:24:57.473092   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:24:57.499838   32020 start.go:296] duration metric: took 124.881379ms for postStartSetup
	I1028 17:24:57.499880   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:57.500412   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.502520   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.502817   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.502846   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.503009   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:57.503210   32020 start.go:128] duration metric: took 24.732586487s to createHost
	I1028 17:24:57.503234   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.505276   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505578   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.505602   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505703   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.505855   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.505992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.506115   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.506245   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.506406   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.506418   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:24:57.608878   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136297.586420313
	
	I1028 17:24:57.608900   32020 fix.go:216] guest clock: 1730136297.586420313
	I1028 17:24:57.608919   32020 fix.go:229] Guest: 2024-10-28 17:24:57.586420313 +0000 UTC Remote: 2024-10-28 17:24:57.503223131 +0000 UTC m=+24.834191366 (delta=83.197182ms)
	I1028 17:24:57.608956   32020 fix.go:200] guest clock delta is within tolerance: 83.197182ms
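	[editor's note] The guest-clock check above parses the output of `date +%s.%N` on the guest and compares it to the host's notion of the time. A small sketch of that parse-and-compare step, assuming an illustrative tolerance value:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "seconds.nanoseconds" output of `date +%s.%N`
	// into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1730136297.586420313")
		if err != nil {
			fmt.Println(err)
			return
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// only resync the guest clock when drift exceeds the tolerance (value illustrative)
		const tolerance = 2 * time.Second
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
	}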
	I1028 17:24:57.608963   32020 start.go:83] releasing machines lock for "ha-381619", held for 24.838412899s
	I1028 17:24:57.608987   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.609175   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.611488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611798   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.611830   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611946   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612411   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612586   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612684   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:24:57.612719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.612770   32020 ssh_runner.go:195] Run: cat /version.json
	I1028 17:24:57.612787   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.615260   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615428   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615614   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615648   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615673   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615698   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615759   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615940   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615944   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616269   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616272   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.616376   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.711561   32020 ssh_runner.go:195] Run: systemctl --version
	I1028 17:24:57.717385   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:24:57.881204   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:24:57.887117   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:24:57.887178   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:24:57.902953   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:24:57.902971   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:24:57.903029   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:24:57.919599   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:24:57.932865   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:24:57.932911   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:24:57.945714   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:24:57.958712   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:24:58.074716   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:24:58.228971   32020 docker.go:233] disabling docker service ...
	I1028 17:24:58.229043   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:24:58.242560   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:24:58.255313   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:24:58.370441   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:24:58.483893   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:24:58.497247   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:24:58.514703   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:24:58.514757   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.524413   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:24:58.524490   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.534125   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.543414   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.553077   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:24:58.562606   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.572154   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.588419   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.597992   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:24:58.606565   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:24:58.606613   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:24:58.618268   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:24:58.627230   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:24:58.734287   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
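	[editor's note] The block above reconfigures CRI-O by editing /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) and then restarting the service. A rough sketch of running such a command sequence over SSH and stopping at the first failure; the runRemote helper and the plain "ssh docker@addr" invocation are illustrative stand-ins for minikube's ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runRemote executes one shell command on the guest over ssh.
	// Hypothetical helper; the real code uses key-based auth and its own runner.
	func runRemote(addr, cmd string) error {
		out, err := exec.Command("ssh", "docker@"+addr, cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
		}
		return nil
	}

	func configureCRIO(addr string) error {
		cmds := []string{
			// point CRI-O at the desired pause image
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// use cgroupfs as the cgroup manager, matching the kubelet config
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			if err := runRemote(addr, c); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO("192.168.39.230"); err != nil {
			fmt.Println(err)
		}
	}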
	I1028 17:24:58.826354   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:24:58.826428   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:24:58.830997   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:24:58.831057   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:24:58.834579   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:24:58.876875   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:24:58.876953   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.903643   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.932572   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:24:58.933808   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:58.935970   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936230   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:58.936257   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936509   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:24:58.940296   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
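	[editor's note] The one-liner above updates /etc/hosts idempotently: filter out any existing host.minikube.internal line, append the current mapping, and copy the result back into place. The same filter-then-append pattern in plain Go, operating on a scratch file rather than /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for host and appends a fresh
	// "ip<TAB>host" mapping, mirroring the grep/echo/cp pipeline in the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		if len(data) > 0 {
			for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
				if strings.HasSuffix(line, "\t"+host) {
					continue // stale entry for this host
				}
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// writes to a local test file; the real command edits /etc/hosts inside the guest
		if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}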
	I1028 17:24:58.952574   32020 kubeadm.go:883] updating cluster {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:24:58.952676   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:58.952732   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:24:58.984654   32020 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 17:24:58.984732   32020 ssh_runner.go:195] Run: which lz4
	I1028 17:24:58.988394   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 17:24:58.988478   32020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 17:24:58.992506   32020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 17:24:58.992533   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 17:25:00.255551   32020 crio.go:462] duration metric: took 1.267100193s to copy over tarball
	I1028 17:25:00.255628   32020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 17:25:02.245448   32020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.989785325s)
	I1028 17:25:02.245479   32020 crio.go:469] duration metric: took 1.989902074s to extract the tarball
	I1028 17:25:02.245485   32020 ssh_runner.go:146] rm: /preloaded.tar.lz4
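	[editor's note] The preload step above copies a ~392 MB lz4-compressed image tarball to the guest and unpacks it into /var with tar's -I lz4 option while preserving security xattrs. A hedged local equivalent of that extraction call via os/exec; it assumes tar and lz4 are installed and uses the same flags shown in the log.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// extractPreload unpacks an lz4-compressed tarball into destDir,
	// preserving security xattrs, as the log's tar invocation does.
	func extractPreload(tarball, destDir string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", destDir, "-xf", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("tar failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		start := time.Now()
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("extracted in %s\n", time.Since(start))
	}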
	I1028 17:25:02.282635   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:25:02.327962   32020 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:25:02.327983   32020 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:25:02.327990   32020 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.2 crio true true} ...
	I1028 17:25:02.328079   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:02.328139   32020 ssh_runner.go:195] Run: crio config
	I1028 17:25:02.370696   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:02.370725   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:02.370738   32020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:25:02.370766   32020 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-381619 NodeName:ha-381619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:25:02.370888   32020 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-381619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:25:02.370908   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:02.370947   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:02.386589   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:02.386701   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
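	[editor's note] The kube-vip static-pod manifest above is generated from the cluster's control-plane VIP (192.168.39.254) and API port, then scp'd to /etc/kubernetes/manifests on the node. A small text/template sketch of that kind of generation; the template covers only a fragment of the env block and is an illustration, not minikube's kube-vip.go template.

	package main

	import (
		"os"
		"text/template"
	)

	// vipParams are the values substituted into the manifest fragment.
	type vipParams struct {
		VIP  string
		Port string
	}

	// tmpl renders just the env entries that carry the load-balancer VIP;
	// the real manifest also defines the container image, mounts, etc.
	var tmpl = template.Must(template.New("kube-vip").Parse(`    env:
	    - name: port
	      value: "{{.Port}}"
	    - name: address
	      value: {{.VIP}}
	    - name: cp_enable
	      value: "true"
	`))

	func main() {
		// writes the fragment to stdout; minikube writes the full manifest to the node
		_ = tmpl.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443"})
	}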
	I1028 17:25:02.386768   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:02.396553   32020 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:25:02.396617   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 17:25:02.405738   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 17:25:02.421400   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:02.437117   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 17:25:02.452375   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 17:25:02.467922   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:02.471573   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:02.483093   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:02.609045   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:25:02.625565   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.230
	I1028 17:25:02.625588   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:02.625605   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.625774   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:02.625839   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:02.625856   32020 certs.go:256] generating profile certs ...
	I1028 17:25:02.625920   32020 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:02.625937   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt with IP's: []
	I1028 17:25:02.808278   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt ...
	I1028 17:25:02.808301   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt: {Name:mkc46e4b9b851301d42b46f45c8b044b11edfb36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808454   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key ...
	I1028 17:25:02.808464   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key: {Name:mkd681d3c01379608131f30441747317e91c7a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808570   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb
	I1028 17:25:02.808586   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.254]
	I1028 17:25:03.000249   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb ...
	I1028 17:25:03.000276   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb: {Name:mka7f7f8394389959cb184a46e51c1572954cddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000436   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb ...
	I1028 17:25:03.000449   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb: {Name:mk9ae1b9eef85a6c1bbc7739c982c84bfb111d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000555   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:03.000643   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:25:03.000695   32020 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:25:03.000710   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt with IP's: []
	I1028 17:25:03.126776   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt ...
	I1028 17:25:03.126802   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt: {Name:mk682452f5be7b32ad3e949275f7af954945db7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.126938   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key ...
	I1028 17:25:03.126948   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key: {Name:mk5feeb9713d67bfc630ef82b40280ce400bc4ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
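(Editor's note: the profile certificates generated above — client, apiserver, proxy-client — are ordinary x509 certificates signed by the existing minikubeCA. Below is a minimal Go sketch of the same pattern using only the standard library; the CA here is generated on the fly, and key sizes, validity periods, and file names are illustrative, while the SAN list is the one shown in the log: service IP 10.96.0.1, loopback, 10.0.0.1, node IP 192.168.39.230, and the HA VIP 192.168.39.254.)

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Stand-in for the existing minikubeCA key pair that the log reuses.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Apiserver serving certificate with the IP SANs listed in the log.
    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.230"), net.ParseIP("192.168.39.254"),
        },
    }
    srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    _ = os.WriteFile("apiserver.crt",
        pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
}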
	I1028 17:25:03.127009   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:03.127027   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:03.127041   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:03.127053   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:03.127070   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:03.127083   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:03.127094   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:03.127106   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:03.127161   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:03.127194   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:03.127204   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:03.127228   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:03.127253   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:03.127274   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:03.127311   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:03.127335   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.127348   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.127360   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.127858   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:03.153264   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:03.175704   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:03.198131   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:03.220379   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 17:25:03.243352   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:25:03.265623   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:03.287951   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:03.312260   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:03.336494   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:03.363576   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:03.401524   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:25:03.430796   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:03.437428   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:03.448106   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452501   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452553   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.458194   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:03.468982   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:03.479358   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483520   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483564   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.488936   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:03.499033   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:03.509212   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513380   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513413   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.518680   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
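(Editor's note: the ls / openssl x509 -hash / ln -fs sequence repeated above for 206802.pem, minikubeCA.pem and 20680.pem follows the standard OpenSSL rehash convention: each CA file is exposed in /etc/ssl/certs under its subject hash plus a ".0" suffix, e.g. b5213941.0 for minikubeCA.pem, which is how OpenSSL locates trust anchors. A minimal local sketch of the same idea in Go; the helper name is made up and error handling is kept to the bare minimum.)

package main

import (
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkBySubjectHash mirrors the log's `openssl x509 -hash -noout` + `ln -fs`
// pair: compute the certificate's subject hash and symlink <hash>.0 to it.
func linkBySubjectHash(certPath, trustDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
    _ = os.Remove(link) // emulate ln -f: replace an existing link
    return os.Symlink(certPath, link)
}

func main() {
    _ = linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}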
	I1028 17:25:03.528774   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:03.532547   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:03.532597   32020 kubeadm.go:392] StartCluster: {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:03.532684   32020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:25:03.532747   32020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:25:03.571597   32020 cri.go:89] found id: ""
	I1028 17:25:03.571655   32020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:25:03.581447   32020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:25:03.590775   32020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:25:03.599971   32020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:25:03.599983   32020 kubeadm.go:157] found existing configuration files:
	
	I1028 17:25:03.600011   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:25:03.608531   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:25:03.608565   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:25:03.617452   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:25:03.626079   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:25:03.626124   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:25:03.635124   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.644097   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:25:03.644143   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.653605   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:25:03.662453   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:25:03.662497   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
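(Editor's note: the four grep/rm pairs above implement one rule: an existing file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the kubeadm init that follows can regenerate it. Here all four files are simply absent, hence the status-2 greps. minikube runs the checks remotely through ssh_runner; below is a rough local Go equivalent of just the logic, with an illustrative function name.)

package main

import (
    "os"
    "strings"
)

// cleanStaleConfigs drops any kubeconfig that does not point at the expected
// control-plane endpoint, leaving missing files alone.
func cleanStaleConfigs(endpoint string, paths []string) {
    for _, p := range paths {
        data, err := os.ReadFile(p)
        if err != nil {
            continue // file absent, as for all four configs in this log
        }
        if !strings.Contains(string(data), endpoint) {
            _ = os.Remove(p) // stale: points elsewhere, let kubeadm rewrite it
        }
    }
}

func main() {
    cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
        "/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf",
    })
}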
	I1028 17:25:03.671488   32020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 17:25:03.865602   32020 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 17:25:14.531712   32020 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:25:14.531787   32020 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:25:14.531884   32020 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:25:14.532023   32020 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:25:14.532157   32020 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:25:14.532250   32020 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:25:14.533662   32020 out.go:235]   - Generating certificates and keys ...
	I1028 17:25:14.533743   32020 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:25:14.533841   32020 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:25:14.533931   32020 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:25:14.534016   32020 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:25:14.534080   32020 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:25:14.534133   32020 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:25:14.534179   32020 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:25:14.534283   32020 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534363   32020 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:25:14.534530   32020 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534620   32020 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:25:14.534728   32020 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:25:14.534800   32020 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:25:14.534868   32020 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:25:14.534934   32020 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:25:14.535013   32020 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:25:14.535092   32020 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:25:14.535200   32020 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:25:14.535281   32020 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:25:14.535399   32020 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:25:14.535478   32020 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:25:14.537017   32020 out.go:235]   - Booting up control plane ...
	I1028 17:25:14.537115   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:25:14.537184   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:25:14.537257   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:25:14.537408   32020 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:25:14.537527   32020 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:25:14.537591   32020 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:25:14.537728   32020 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:25:14.537862   32020 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:25:14.537919   32020 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001240837s
	I1028 17:25:14.537979   32020 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:25:14.538029   32020 kubeadm.go:310] [api-check] The API server is healthy after 5.745465318s
	I1028 17:25:14.538126   32020 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:25:14.538233   32020 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:25:14.538314   32020 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:25:14.538487   32020 kubeadm.go:310] [mark-control-plane] Marking the node ha-381619 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:25:14.538537   32020 kubeadm.go:310] [bootstrap-token] Using token: z48g6f.v3e9buj5ot2drke2
	I1028 17:25:14.539818   32020 out.go:235]   - Configuring RBAC rules ...
	I1028 17:25:14.539934   32020 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:25:14.540010   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:25:14.540140   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:25:14.540310   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:25:14.540484   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:25:14.540575   32020 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:25:14.540725   32020 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:25:14.540796   32020 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:25:14.540853   32020 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:25:14.540862   32020 kubeadm.go:310] 
	I1028 17:25:14.540934   32020 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:25:14.540941   32020 kubeadm.go:310] 
	I1028 17:25:14.541053   32020 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:25:14.541063   32020 kubeadm.go:310] 
	I1028 17:25:14.541098   32020 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:25:14.541149   32020 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:25:14.541207   32020 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:25:14.541220   32020 kubeadm.go:310] 
	I1028 17:25:14.541267   32020 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:25:14.541273   32020 kubeadm.go:310] 
	I1028 17:25:14.541311   32020 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:25:14.541317   32020 kubeadm.go:310] 
	I1028 17:25:14.541391   32020 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:25:14.541462   32020 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:25:14.541520   32020 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:25:14.541526   32020 kubeadm.go:310] 
	I1028 17:25:14.541594   32020 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:25:14.541676   32020 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:25:14.541684   32020 kubeadm.go:310] 
	I1028 17:25:14.541772   32020 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.541903   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 17:25:14.541939   32020 kubeadm.go:310] 	--control-plane 
	I1028 17:25:14.541952   32020 kubeadm.go:310] 
	I1028 17:25:14.542037   32020 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:25:14.542044   32020 kubeadm.go:310] 
	I1028 17:25:14.542111   32020 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.542209   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
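(Editor's note: the --discovery-token-ca-cert-hash value in both join commands is, as far as I know, the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, so a joining node can pin the CA without shipping the certificate itself. A small sketch under that assumption for recomputing it from ca.crt; the path is illustrative — on the node the CA sits under /var/lib/minikube/certs/ca.crt.)

package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/hex"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    data, err := os.ReadFile("ca.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block in ca.crt")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // Hash the DER SubjectPublicKeyInfo, not the whole certificate.
    spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    if err != nil {
        panic(err)
    }
    sum := sha256.Sum256(spki)
    fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}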
	I1028 17:25:14.542219   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:14.542223   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:14.543763   32020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 17:25:14.544966   32020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 17:25:14.550724   32020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 17:25:14.550742   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 17:25:14.570257   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 17:25:14.924676   32020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:25:14.924729   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:14.924751   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619 minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=true
	I1028 17:25:14.954780   32020 ops.go:34] apiserver oom_adj: -16
	I1028 17:25:15.130305   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:15.631369   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.131137   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.631423   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.131390   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.226452   32020 kubeadm.go:1113] duration metric: took 2.301774809s to wait for elevateKubeSystemPrivileges
	I1028 17:25:17.226483   32020 kubeadm.go:394] duration metric: took 13.693888567s to StartCluster
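(Editor's note: the run of `kubectl get sa default` calls above, repeated roughly every half second until one succeeds, is the wait that the 2.3s elevateKubeSystemPrivileges metric measures — minikube polls for the default service account before treating the cluster as usable. A generic sketch of that poll-until-ready shape; the function name, timeout, and interval are illustrative.)

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// waitFor retries check at a fixed interval until it succeeds or times out.
func waitFor(check func() error, timeout, interval time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        if err := check(); err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("condition not met within %s", timeout)
        }
        time.Sleep(interval)
    }
}

func main() {
    err := waitFor(func() error {
        return exec.Command("kubectl", "get", "sa", "default").Run()
    }, 2*time.Minute, 500*time.Millisecond)
    fmt.Println("default service account ready:", err == nil)
}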
	I1028 17:25:17.226504   32020 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.226586   32020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.227504   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.227753   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:25:17.227749   32020 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:17.227776   32020 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 17:25:17.227845   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:25:17.227858   32020 addons.go:69] Setting storage-provisioner=true in profile "ha-381619"
	I1028 17:25:17.227896   32020 addons.go:234] Setting addon storage-provisioner=true in "ha-381619"
	I1028 17:25:17.227912   32020 addons.go:69] Setting default-storageclass=true in profile "ha-381619"
	I1028 17:25:17.227947   32020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-381619"
	I1028 17:25:17.228016   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:17.227925   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.228398   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228444   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.228490   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228533   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.243165   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I1028 17:25:17.243382   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I1028 17:25:17.243612   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.243827   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.244081   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244106   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244338   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244363   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244419   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244705   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244874   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.244986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.245028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.246886   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.247245   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 17:25:17.248034   32020 addons.go:234] Setting addon default-storageclass=true in "ha-381619"
	I1028 17:25:17.248080   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.248440   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.248495   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.248686   32020 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 17:25:17.259449   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I1028 17:25:17.259906   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.260429   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.260457   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.260757   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.260953   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.262554   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.262967   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I1028 17:25:17.263363   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.263726   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.263747   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.264078   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.264715   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.264763   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.264944   32020 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:25:17.266586   32020 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.266605   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:25:17.266623   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.269507   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.269884   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.269905   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.270038   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.270201   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.270351   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.270481   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
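(Editor's note: the sshutil line above carries everything needed to reach the node over SSH: address 192.168.39.230:22, user docker, and the per-machine id_rsa. A minimal sketch of opening such a client with golang.org/x/crypto/ssh and running one command; the command itself is only an example, and skipping host-key verification is tolerable here only because the target is a throwaway local test VM.)

package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    key, err := os.ReadFile("/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa")
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        panic(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
    }
    client, err := ssh.Dial("tcp", "192.168.39.230:22", cfg)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        panic(err)
    }
    defer sess.Close()
    out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
    fmt.Print(string(out))
}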
	I1028 17:25:17.279872   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I1028 17:25:17.280334   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.280920   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.280938   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.281336   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.281528   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.283217   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.283405   32020 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.283421   32020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:25:17.283436   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.285906   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286319   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.286352   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286428   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.286601   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.286754   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.286885   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:17.359502   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:25:17.440263   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.482707   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.757670   32020 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
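(Editor's note: the sed pipeline a few lines up is hard to read; what it does is splice a hosts plugin block into the CoreDNS Corefile, and enable query logging, so that pods can resolve host.minikube.internal to the host-side gateway — the record the message above reports as injected. Reconstructed from that sed expression, the added stanza shown here as a Go string constant.)

package main

import "fmt"

// hostsStanza is reconstructed from the sed expression in the log; CoreDNS
// answers host.minikube.internal from it and falls through to the usual
// forwarders for every other name.
const hostsStanza = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }`

func main() { fmt.Println(hostsStanza) }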
	I1028 17:25:17.987134   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987176   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987203   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987222   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987446   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987453   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987512   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987532   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987544   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987486   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987487   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987697   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987716   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987723   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987752   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987764   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987811   32020 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 17:25:17.987831   32020 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 17:25:17.987933   32020 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 17:25:17.987946   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:17.987957   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:17.987961   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:17.988187   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.988302   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.988326   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.005294   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:25:18.006136   32020 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 17:25:18.006153   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:18.006163   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:18.006169   32020 round_trippers.go:473]     Content-Type: application/json
	I1028 17:25:18.006173   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:18.009564   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:25:18.009782   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:18.009793   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:18.010026   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:18.010041   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.010063   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:18.011483   32020 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 17:25:18.012573   32020 addons.go:510] duration metric: took 784.803587ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 17:25:18.012609   32020 start.go:246] waiting for cluster config update ...
	I1028 17:25:18.012623   32020 start.go:255] writing updated cluster config ...
	I1028 17:25:18.013902   32020 out.go:201] 
	I1028 17:25:18.015058   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:18.015120   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.016447   32020 out.go:177] * Starting "ha-381619-m02" control-plane node in "ha-381619" cluster
	I1028 17:25:18.017519   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:25:18.017534   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:25:18.017609   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:25:18.017619   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:25:18.017672   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.017831   32020 start.go:360] acquireMachinesLock for ha-381619-m02: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:25:18.017871   32020 start.go:364] duration metric: took 23.784µs to acquireMachinesLock for "ha-381619-m02"
	I1028 17:25:18.017886   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:18.017946   32020 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 17:25:18.019437   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:25:18.019500   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:18.019529   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:18.033319   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I1028 17:25:18.033727   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:18.034182   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:18.034200   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:18.034550   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:18.034715   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:18.034872   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:18.035033   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:25:18.035060   32020 client.go:168] LocalClient.Create starting
	I1028 17:25:18.035096   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:25:18.035126   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035142   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035187   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:25:18.035204   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035216   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035230   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:25:18.035237   32020 main.go:141] libmachine: (ha-381619-m02) Calling .PreCreateCheck
	I1028 17:25:18.035397   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:18.035746   32020 main.go:141] libmachine: Creating machine...
	I1028 17:25:18.035760   32020 main.go:141] libmachine: (ha-381619-m02) Calling .Create
	I1028 17:25:18.035901   32020 main.go:141] libmachine: (ha-381619-m02) Creating KVM machine...
	I1028 17:25:18.037157   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing default KVM network
	I1028 17:25:18.037313   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing private KVM network mk-ha-381619
	I1028 17:25:18.037431   32020 main.go:141] libmachine: (ha-381619-m02) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.037482   32020 main.go:141] libmachine: (ha-381619-m02) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:25:18.037542   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.037441   32379 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.037604   32020 main.go:141] libmachine: (ha-381619-m02) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:25:18.305482   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.305364   32379 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa...
	I1028 17:25:18.398014   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.397913   32379 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk...
	I1028 17:25:18.398067   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing magic tar header
	I1028 17:25:18.398088   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing SSH key tar header
	I1028 17:25:18.398095   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.398018   32379 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.398114   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02
	I1028 17:25:18.398136   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:25:18.398156   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 (perms=drwx------)
	I1028 17:25:18.398166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.398180   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:25:18.398187   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:25:18.398194   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:25:18.398201   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home
	I1028 17:25:18.398207   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Skipping /home - not owner
	I1028 17:25:18.398217   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:25:18.398254   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:25:18.398277   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:25:18.398289   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:25:18.398304   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
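(Editor's note: the "Checking permissions" / "Setting executable bit" walk above makes sure every directory from the new machine folder up to, but not including, directories the CI user does not own carries the execute/search bit, so libvirt and qemu can traverse into the store path holding the ISO and raw disk. A rough Go sketch of that walk; the helper name is made up, and the real logic lives in the kvm2 driver — the common.go frames visible in the log.)

package main

import (
    "os"
    "path/filepath"
    "syscall"
)

// ensureTraversable walks from the machine directory toward /, adding the
// execute (search) bit on every directory the current user owns and stopping
// at the first one it does not own (the "Skipping /home - not owner" case).
func ensureTraversable(start string) {
    for dir := start; dir != "/"; dir = filepath.Dir(dir) {
        info, err := os.Stat(dir)
        if err != nil {
            return
        }
        st, ok := info.Sys().(*syscall.Stat_t)
        if !ok || int(st.Uid) != os.Getuid() {
            return
        }
        _ = os.Chmod(dir, info.Mode().Perm()|0o111)
    }
}

func main() {
    ensureTraversable("/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02")
}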
	I1028 17:25:18.398338   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:18.399119   32020 main.go:141] libmachine: (ha-381619-m02) define libvirt domain using xml: 
	I1028 17:25:18.399128   32020 main.go:141] libmachine: (ha-381619-m02) <domain type='kvm'>
	I1028 17:25:18.399133   32020 main.go:141] libmachine: (ha-381619-m02)   <name>ha-381619-m02</name>
	I1028 17:25:18.399138   32020 main.go:141] libmachine: (ha-381619-m02)   <memory unit='MiB'>2200</memory>
	I1028 17:25:18.399142   32020 main.go:141] libmachine: (ha-381619-m02)   <vcpu>2</vcpu>
	I1028 17:25:18.399146   32020 main.go:141] libmachine: (ha-381619-m02)   <features>
	I1028 17:25:18.399154   32020 main.go:141] libmachine: (ha-381619-m02)     <acpi/>
	I1028 17:25:18.399160   32020 main.go:141] libmachine: (ha-381619-m02)     <apic/>
	I1028 17:25:18.399167   32020 main.go:141] libmachine: (ha-381619-m02)     <pae/>
	I1028 17:25:18.399171   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399177   32020 main.go:141] libmachine: (ha-381619-m02)   </features>
	I1028 17:25:18.399183   32020 main.go:141] libmachine: (ha-381619-m02)   <cpu mode='host-passthrough'>
	I1028 17:25:18.399188   32020 main.go:141] libmachine: (ha-381619-m02)   
	I1028 17:25:18.399194   32020 main.go:141] libmachine: (ha-381619-m02)   </cpu>
	I1028 17:25:18.399199   32020 main.go:141] libmachine: (ha-381619-m02)   <os>
	I1028 17:25:18.399206   32020 main.go:141] libmachine: (ha-381619-m02)     <type>hvm</type>
	I1028 17:25:18.399211   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='cdrom'/>
	I1028 17:25:18.399223   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='hd'/>
	I1028 17:25:18.399234   32020 main.go:141] libmachine: (ha-381619-m02)     <bootmenu enable='no'/>
	I1028 17:25:18.399255   32020 main.go:141] libmachine: (ha-381619-m02)   </os>
	I1028 17:25:18.399268   32020 main.go:141] libmachine: (ha-381619-m02)   <devices>
	I1028 17:25:18.399274   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='cdrom'>
	I1028 17:25:18.399282   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/boot2docker.iso'/>
	I1028 17:25:18.399289   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hdc' bus='scsi'/>
	I1028 17:25:18.399293   32020 main.go:141] libmachine: (ha-381619-m02)       <readonly/>
	I1028 17:25:18.399299   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399305   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='disk'>
	I1028 17:25:18.399316   32020 main.go:141] libmachine: (ha-381619-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:25:18.399348   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk'/>
	I1028 17:25:18.399365   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hda' bus='virtio'/>
	I1028 17:25:18.399403   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399425   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399439   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='mk-ha-381619'/>
	I1028 17:25:18.399446   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399454   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399464   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399473   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='default'/>
	I1028 17:25:18.399483   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399491   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399505   32020 main.go:141] libmachine: (ha-381619-m02)     <serial type='pty'>
	I1028 17:25:18.399516   32020 main.go:141] libmachine: (ha-381619-m02)       <target port='0'/>
	I1028 17:25:18.399525   32020 main.go:141] libmachine: (ha-381619-m02)     </serial>
	I1028 17:25:18.399531   32020 main.go:141] libmachine: (ha-381619-m02)     <console type='pty'>
	I1028 17:25:18.399536   32020 main.go:141] libmachine: (ha-381619-m02)       <target type='serial' port='0'/>
	I1028 17:25:18.399544   32020 main.go:141] libmachine: (ha-381619-m02)     </console>
	I1028 17:25:18.399554   32020 main.go:141] libmachine: (ha-381619-m02)     <rng model='virtio'>
	I1028 17:25:18.399564   32020 main.go:141] libmachine: (ha-381619-m02)       <backend model='random'>/dev/random</backend>
	I1028 17:25:18.399578   32020 main.go:141] libmachine: (ha-381619-m02)     </rng>
	I1028 17:25:18.399588   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399596   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399604   32020 main.go:141] libmachine: (ha-381619-m02)   </devices>
	I1028 17:25:18.399613   32020 main.go:141] libmachine: (ha-381619-m02) </domain>
	I1028 17:25:18.399622   32020 main.go:141] libmachine: (ha-381619-m02) 
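The domain XML logged above is handed to libvirt, which defines and then boots the node VM. As a rough illustration of that define-then-create flow, here is a minimal sketch using the libvirt.org/go/libvirt bindings; it is not the kvm2 driver's actual code, and the file name and connection URI are assumptions:

// Sketch: define a libvirt domain from an XML description and start it,
// mirroring the "define libvirt domain using xml" / "Creating domain..." steps above.
package main

import (
	"fmt"
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Domain XML like the one printed in the log (hypothetical file name).
	xml, err := os.ReadFile("ha-381619-m02.xml")
	if err != nil {
		log.Fatalf("read domain xml: %v", err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// DefineXML registers the domain persistently; Create boots it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	fmt.Println("domain defined and started")
}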
	I1028 17:25:18.405867   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:26:9b:68 in network default
	I1028 17:25:18.406379   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring networks are active...
	I1028 17:25:18.406395   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:18.407090   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network default is active
	I1028 17:25:18.407385   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network mk-ha-381619 is active
	I1028 17:25:18.407717   32020 main.go:141] libmachine: (ha-381619-m02) Getting domain xml...
	I1028 17:25:18.408378   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:19.597563   32020 main.go:141] libmachine: (ha-381619-m02) Waiting to get IP...
	I1028 17:25:19.598384   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.598740   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.598789   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.598740   32379 retry.go:31] will retry after 190.903064ms: waiting for machine to come up
	I1028 17:25:19.791078   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.791557   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.791589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.791498   32379 retry.go:31] will retry after 306.415198ms: waiting for machine to come up
	I1028 17:25:20.099990   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.100410   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.100438   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.100363   32379 retry.go:31] will retry after 461.052427ms: waiting for machine to come up
	I1028 17:25:20.562787   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.563226   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.563254   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.563181   32379 retry.go:31] will retry after 399.454176ms: waiting for machine to come up
	I1028 17:25:20.964734   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.965138   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.965168   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.965088   32379 retry.go:31] will retry after 468.537228ms: waiting for machine to come up
	I1028 17:25:21.435633   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:21.436036   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:21.436065   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:21.435978   32379 retry.go:31] will retry after 901.623232ms: waiting for machine to come up
	I1028 17:25:22.338882   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:22.339214   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:22.339251   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:22.339170   32379 retry.go:31] will retry after 1.174231376s: waiting for machine to come up
	I1028 17:25:23.514567   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:23.515122   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:23.515148   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:23.515075   32379 retry.go:31] will retry after 1.47285995s: waiting for machine to come up
	I1028 17:25:24.989376   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:24.989742   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:24.989772   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:24.989693   32379 retry.go:31] will retry after 1.395202662s: waiting for machine to come up
	I1028 17:25:26.387051   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:26.387470   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:26.387497   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:26.387419   32379 retry.go:31] will retry after 1.648219706s: waiting for machine to come up
	I1028 17:25:28.036842   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:28.037349   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:28.037375   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:28.037295   32379 retry.go:31] will retry after 2.189322328s: waiting for machine to come up
	I1028 17:25:30.229493   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:30.229820   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:30.229841   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:30.229780   32379 retry.go:31] will retry after 2.90274213s: waiting for machine to come up
	I1028 17:25:33.134730   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:33.135076   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:33.135092   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:33.135034   32379 retry.go:31] will retry after 4.079584337s: waiting for machine to come up
	I1028 17:25:37.219140   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:37.219485   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:37.219505   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:37.219442   32379 retry.go:31] will retry after 4.856708442s: waiting for machine to come up
	I1028 17:25:42.077346   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077745   32020 main.go:141] libmachine: (ha-381619-m02) Found IP for machine: 192.168.39.171
	I1028 17:25:42.077766   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has current primary IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077785   32020 main.go:141] libmachine: (ha-381619-m02) Reserving static IP address...
	I1028 17:25:42.078069   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "ha-381619-m02", mac: "52:54:00:ab:1d:c9", ip: "192.168.39.171"} in network mk-ha-381619
	I1028 17:25:42.145216   32020 main.go:141] libmachine: (ha-381619-m02) Reserved static IP address: 192.168.39.171
	I1028 17:25:42.145248   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:42.145256   32020 main.go:141] libmachine: (ha-381619-m02) Waiting for SSH to be available...
	I1028 17:25:42.147449   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.147844   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619
	I1028 17:25:42.147868   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:ab:1d:c9
	I1028 17:25:42.148011   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:42.148037   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:42.148079   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:42.148093   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:42.148106   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:42.151405   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:25:42.151422   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:25:42.151430   32020 main.go:141] libmachine: (ha-381619-m02) DBG | command : exit 0
	I1028 17:25:42.151434   32020 main.go:141] libmachine: (ha-381619-m02) DBG | err     : exit status 255
	I1028 17:25:42.151457   32020 main.go:141] libmachine: (ha-381619-m02) DBG | output  : 
	I1028 17:25:45.153548   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:45.155666   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156001   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.156026   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156153   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:45.156174   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:45.156209   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:45.156220   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:45.156228   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:45.284123   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 17:25:45.284412   32020 main.go:141] libmachine: (ha-381619-m02) KVM machine creation complete!
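The repeated `exit 0` probes above implement a wait-for-SSH loop: keep dialing the guest with the machine's private key, backing off between attempts, until a trivial command succeeds. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; it is a hypothetical helper, not minikube's provisioning code, and the address, key path, backoff, and timeouts are assumptions:

// Sketch: retry an SSH dial with exponential backoff until the guest accepts it.
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func waitForSSH(addr string, key []byte, timeout time.Duration) error {
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return fmt.Errorf("parse key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		Timeout:         10 * time.Second,
	}

	deadline := time.Now().Add(timeout)
	for backoff := 200 * time.Millisecond; time.Now().Before(deadline); backoff *= 2 {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			client.Close()
			return nil // equivalent of the `exit 0` probe succeeding
		}
		log.Printf("ssh not ready (%v), retrying in %v", err, backoff)
		time.Sleep(backoff)
	}
	return fmt.Errorf("ssh not available on %s within %v", addr, timeout)
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-381619-m02/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForSSH("192.168.39.171:22", key, 5*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}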
	I1028 17:25:45.284721   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:45.285293   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285476   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285636   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:25:45.285651   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetState
	I1028 17:25:45.286839   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:25:45.286853   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:25:45.286874   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:25:45.286883   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.289343   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289699   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.289732   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289877   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.290050   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290180   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290283   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.290450   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.290659   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.290673   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:25:45.403429   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:25:45.403453   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:25:45.403460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.406169   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406520   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.406547   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406664   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.406833   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.406968   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.407121   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.407274   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.407471   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.407486   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:25:45.516915   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:25:45.516972   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:25:45.516982   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:25:45.516996   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517247   32020 buildroot.go:166] provisioning hostname "ha-381619-m02"
	I1028 17:25:45.517269   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.520442   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.520895   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.520951   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.521136   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.521306   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521441   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521550   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.521679   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.521869   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.521885   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m02 && echo "ha-381619-m02" | sudo tee /etc/hostname
	I1028 17:25:45.647896   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m02
	
	I1028 17:25:45.647923   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.650559   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.650915   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.650946   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.651119   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.651299   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651606   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.651778   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.651948   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.651967   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:25:45.773264   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:25:45.773293   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:25:45.773315   32020 buildroot.go:174] setting up certificates
	I1028 17:25:45.773322   32020 provision.go:84] configureAuth start
	I1028 17:25:45.773330   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.773552   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:45.776602   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.776920   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.776944   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.777092   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.779167   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779415   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.779440   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779566   32020 provision.go:143] copyHostCerts
	I1028 17:25:45.779590   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779620   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:25:45.779629   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779712   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:25:45.779784   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779808   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:25:45.779815   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779839   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:25:45.779883   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779899   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:25:45.779905   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779925   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:25:45.779969   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m02 san=[127.0.0.1 192.168.39.171 ha-381619-m02 localhost minikube]
	I1028 17:25:45.949948   32020 provision.go:177] copyRemoteCerts
	I1028 17:25:45.950001   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:25:45.950022   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.952596   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.952955   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.953006   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.953158   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.953335   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.953473   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.953584   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.038279   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:25:46.038337   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:25:46.061947   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:25:46.062008   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:25:46.084393   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:25:46.084451   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:25:46.107114   32020 provision.go:87] duration metric: took 333.781683ms to configureAuth
	I1028 17:25:46.107142   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:25:46.107303   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:46.107385   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.110324   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110650   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.110678   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110841   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.111029   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111171   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111337   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.111521   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.111668   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.111682   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:25:46.333665   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:25:46.333687   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:25:46.333695   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetURL
	I1028 17:25:46.335063   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using libvirt version 6000000
	I1028 17:25:46.337491   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.337821   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.337850   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.338022   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:25:46.338038   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:25:46.338046   32020 client.go:171] duration metric: took 28.302974924s to LocalClient.Create
	I1028 17:25:46.338089   32020 start.go:167] duration metric: took 28.303046594s to libmachine.API.Create "ha-381619"
	I1028 17:25:46.338103   32020 start.go:293] postStartSetup for "ha-381619-m02" (driver="kvm2")
	I1028 17:25:46.338115   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:25:46.338137   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.338375   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:25:46.338401   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.340858   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341271   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.341298   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.341568   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.341713   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.341825   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.426689   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:25:46.431014   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:25:46.431038   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:25:46.431111   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:25:46.431208   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:25:46.431224   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:25:46.431391   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:25:46.440073   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:46.463120   32020 start.go:296] duration metric: took 125.005816ms for postStartSetup
	I1028 17:25:46.463168   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:46.463762   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.466198   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466494   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.466531   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466725   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:46.466921   32020 start.go:128] duration metric: took 28.448963909s to createHost
	I1028 17:25:46.466949   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.469249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469565   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.469589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469704   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.469861   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.469984   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.470143   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.470307   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.470485   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.470498   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:25:46.580856   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136346.562587281
	
	I1028 17:25:46.580878   32020 fix.go:216] guest clock: 1730136346.562587281
	I1028 17:25:46.580887   32020 fix.go:229] Guest: 2024-10-28 17:25:46.562587281 +0000 UTC Remote: 2024-10-28 17:25:46.466934782 +0000 UTC m=+73.797903078 (delta=95.652499ms)
	I1028 17:25:46.580901   32020 fix.go:200] guest clock delta is within tolerance: 95.652499ms
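The guest-clock check above runs `date +%s.%N` on the VM and compares it with the host-side timestamp, accepting the machine only when the drift is small. A toy reproduction of that comparison using the values from this log (the 2s tolerance here is an assumption, not minikube's actual threshold):

// Sketch: compute guest/host clock delta and check it against a tolerance.
package main

import (
	"fmt"
	"math"
	"time"
)

func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	guest := time.Unix(1730136346, 562587281)        // from `date +%s.%N` above
	host := guest.Add(-95652499 * time.Nanosecond)   // delta reported in the log: ~95.65ms
	if d, ok := clockDeltaOK(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}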
	I1028 17:25:46.580907   32020 start.go:83] releasing machines lock for "ha-381619-m02", held for 28.563026837s
	I1028 17:25:46.580924   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.581186   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.583856   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.584218   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.584249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.586494   32020 out.go:177] * Found network options:
	I1028 17:25:46.587894   32020 out.go:177]   - NO_PROXY=192.168.39.230
	W1028 17:25:46.589029   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589070   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589532   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589694   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589788   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:25:46.589827   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	W1028 17:25:46.589854   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589924   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:25:46.589942   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.592456   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592681   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592853   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.592873   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592998   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593129   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.593189   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.593257   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593327   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593495   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593488   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.593663   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593796   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.834104   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:25:46.840249   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:25:46.840309   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:25:46.857442   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:25:46.857462   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:25:46.857520   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:25:46.874062   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:25:46.887622   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:25:46.887678   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:25:46.901054   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:25:46.914614   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:25:47.030203   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:25:47.173397   32020 docker.go:233] disabling docker service ...
	I1028 17:25:47.173471   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:25:47.187602   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:25:47.200124   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:25:47.343002   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:25:47.463446   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:25:47.477391   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:25:47.495284   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:25:47.495336   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.505232   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:25:47.505290   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.515205   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.524903   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.534665   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:25:47.544548   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.554185   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.570492   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.580150   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:25:47.588959   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:25:47.588998   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:25:47.602144   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:25:47.611274   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:47.728237   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:25:47.819661   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:25:47.819739   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:25:47.825086   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:25:47.825133   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:25:47.828919   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:25:47.865608   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:25:47.865686   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.891971   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.920487   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:25:47.921941   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:25:47.923245   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:47.926002   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926296   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:47.926314   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926539   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:25:47.930572   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:47.943132   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:25:47.943291   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:47.943533   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.943566   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.957947   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I1028 17:25:47.958254   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.958709   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.958727   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.959022   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.959199   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:47.960488   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:47.960756   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.960791   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.974636   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I1028 17:25:47.975037   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.975478   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.975496   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.975773   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.975952   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:47.976140   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.171
	I1028 17:25:47.976153   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:47.976170   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:47.976307   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:47.976364   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:47.976377   32020 certs.go:256] generating profile certs ...
	I1028 17:25:47.976489   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:47.976518   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6
	I1028 17:25:47.976537   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.254]
	I1028 17:25:48.173298   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 ...
	I1028 17:25:48.173326   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6: {Name:mkf5ce350ef4737e80e11fe080b891074a0af9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173482   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 ...
	I1028 17:25:48.173493   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6: {Name:mk4892e87f7052cc8a58e00369d3170cecec3e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173560   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:48.173681   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:25:48.173810   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
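
The apiserver profile cert generated above carries IP SANs for the in-cluster service IP (10.96.0.1), localhost, both control-plane node IPs, and the HA virtual IP 192.168.39.254, so the API server presents a valid certificate no matter which address a client dials. A minimal standard-library sketch of issuing a cert with those SANs (a throwaway self-signed CA is created here only to keep the example self-contained; the real run reuses the existing minikubeCA key pair, and errors are elided for brevity):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, only so the sketch compiles and runs on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs listed in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.230"), net.ParseIP("192.168.39.171"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```
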
	I1028 17:25:48.173826   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:48.173840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:48.173854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:48.173866   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:48.173879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:48.173891   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:48.173902   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:48.173913   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:48.173957   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:48.173999   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:48.174009   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:48.174030   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:48.174051   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:48.174071   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:48.174117   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:48.174144   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.174158   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.174169   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.174198   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:48.177148   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177545   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:48.177579   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177737   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:48.177910   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:48.178048   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:48.178158   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:48.248817   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:25:48.254098   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:25:48.264499   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:25:48.268575   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:25:48.278929   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:25:48.283180   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:25:48.292856   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:25:48.296876   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:25:48.306132   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:25:48.310003   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:25:48.319418   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:25:48.323887   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:25:48.335408   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:48.360541   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:48.384095   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:48.407120   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:48.429601   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 17:25:48.452108   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:25:48.474717   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:48.497519   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:48.519884   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:48.542530   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:48.565246   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:48.587411   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:25:48.603353   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:25:48.618794   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:25:48.634198   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:25:48.649902   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:25:48.665540   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:25:48.680907   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 17:25:48.697446   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:48.703204   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:48.713589   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718016   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718162   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.723740   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:48.734297   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:48.744539   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748653   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748709   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.754164   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:48.764209   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:48.774379   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778691   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778734   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.784288   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
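
The openssl/ln sequence above follows the OpenSSL trust-store convention: each CA file is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as `<hash>.0`, which is how TLS clients on the node locate it. A rough sketch of that step, shelling out to openssl exactly as the logged commands do (the path is the minikubeCA.pem from the log; running it for real needs root and openssl on PATH):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(certPath string) error {
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Link /etc/ssl/certs/<hash>.0 -> the cert, mirroring the `ln -fs` commands above.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
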
	I1028 17:25:48.794987   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:48.799006   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:48.799053   32020 kubeadm.go:934] updating node {m02 192.168.39.171 8443 v1.31.2 crio true true} ...
	I1028 17:25:48.799121   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:48.799142   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:48.799168   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:48.823470   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:48.823527   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
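
The generated manifest runs kube-vip as a static pod with leader election (lease plndr-cp-lock) and control-plane load balancing enabled, so whichever control-plane node currently holds the lease answers on the virtual IP 192.168.39.254:8443. A quick connectivity sketch, assuming the address and port values from the config above:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address and port taken from the generated kube-vip config.
	const vip = "192.168.39.254:8443"
	conn, err := net.DialTimeout("tcp", vip, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP is accepting connections on", vip)
}
```
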
	I1028 17:25:48.823569   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.835145   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:25:48.835188   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.844460   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:25:48.844491   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844545   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844552   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 17:25:48.844586   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 17:25:48.848931   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:25:48.848960   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:25:49.845765   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.845846   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.851022   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:25:49.851049   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:25:49.995196   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:25:50.018003   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.018112   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.028108   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:25:50.028154   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
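
The `checksum=file:` URLs above mean each Kubernetes binary is verified against the .sha256 file published next to it before being copied to the node. A hedged sketch of that download-and-verify pattern (not minikube's download.go, just the idea, using the kubectl URL from the log):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory, failing on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The published .sha256 file contains the hex digest of the binary.
	want := strings.Fields(string(sum))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	_ = os.WriteFile("kubectl", bin, 0o755)
	fmt.Println("kubectl verified and written")
}
```
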
	I1028 17:25:50.413235   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:25:50.422462   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 17:25:50.439863   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:50.457114   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:25:50.474256   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:50.477946   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:50.489942   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:50.615829   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:25:50.634721   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:50.635033   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:50.635082   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:50.649391   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I1028 17:25:50.649767   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:50.650191   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:50.650209   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:50.650503   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:50.650660   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:50.650788   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:50.650874   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:25:50.650889   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:50.653655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654061   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:50.654087   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654224   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:50.654401   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:50.654535   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:50.654636   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:50.789658   32020 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:50.789699   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443"
	I1028 17:26:12.167714   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443": (21.377987897s)
	I1028 17:26:12.167759   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:26:12.604075   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m02 minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:26:12.730286   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:26:12.839048   32020 start.go:319] duration metric: took 22.188254958s to joinCluster
	I1028 17:26:12.839133   32020 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:12.839439   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:12.840330   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:26:12.841472   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:26:13.041048   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:26:13.058928   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:26:13.059251   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:26:13.059331   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:26:13.059574   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m02" to be "Ready" ...
	I1028 17:26:13.059667   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.059677   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.059688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.059694   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.077343   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:26:13.560169   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.560188   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.560196   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.560200   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.573882   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:14.060794   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.060818   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.060828   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.060835   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.068335   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:14.560535   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.560554   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.560562   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.560567   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.564008   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:15.060016   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.060055   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.060066   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.060072   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.064096   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:15.064637   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:15.559999   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.560030   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.560041   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.560046   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.563431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.059828   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.059852   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.059862   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.059867   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.063732   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.560697   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.560722   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.560733   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.560739   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.564261   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:17.060671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.060698   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.060711   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.060718   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.064995   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:17.066041   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:17.560713   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.560732   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.560749   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.563531   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:18.060093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.060116   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.060127   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.060135   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.064122   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:18.559857   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.559879   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.559887   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.559898   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.563832   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:19.059842   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.059867   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.059879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.059884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.065030   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:19.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.559871   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.559879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.559884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.562800   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:19.563587   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:20.059873   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.059895   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.059905   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.059912   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.073315   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:20.560212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.560231   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.560239   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.560243   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.650492   32020 round_trippers.go:574] Response Status: 200 OK in 90 milliseconds
	I1028 17:26:21.059937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.059963   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.059974   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.059979   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.064508   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:21.560559   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.560581   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.560590   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.560594   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.563714   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:21.564443   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:22.059724   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.059744   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.059752   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.059757   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.063391   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:22.560710   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.560731   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.560738   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.563846   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.060524   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.060544   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.060554   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.060561   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.064448   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.560417   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.560438   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.560447   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.560451   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.563535   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.060636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.060664   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.060675   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.060683   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.064043   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.064451   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:24.559868   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.559899   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.559907   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.559910   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.562925   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:25.059880   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.059902   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.059910   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.059915   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.063972   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:25.559872   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.559894   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.559901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.559905   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.563081   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:26.060748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.060770   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.060782   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.060788   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.064990   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:26.065576   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:26.559841   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.559863   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.559871   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.559876   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.562740   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:27.059746   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.059768   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.059775   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.059779   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.063135   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:27.560126   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.560145   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.560153   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.560158   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.563096   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:28.060723   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.060746   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.060757   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.060763   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.065003   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:28.560732   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.560757   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.560767   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.560774   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.563965   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:28.564617   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:29.059876   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.059903   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.059914   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.059919   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.067282   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:29.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.559872   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.559880   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.559883   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.562804   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:30.059831   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.059853   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.059867   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.059875   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.063855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:30.560631   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.560653   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.560665   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.560670   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.563630   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:31.059907   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.059925   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.059933   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.059938   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.064319   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:31.065078   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:31.560248   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.560271   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.560278   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.560282   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.563146   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:32.059755   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.059779   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.059790   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.059796   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.065145   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:32.560006   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.560026   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.560034   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.560038   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.563453   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.060614   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.060633   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.060641   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.060647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.064544   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.066373   32020 node_ready.go:49] node "ha-381619-m02" has status "Ready":"True"
	I1028 17:26:33.066389   32020 node_ready.go:38] duration metric: took 20.006796944s for node "ha-381619-m02" to be "Ready" ...
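
The repeated GET /api/v1/nodes/ha-381619-m02 calls above are a 500ms poll that stops once the node reports Ready, here after roughly 20 seconds. Approximately the same wait expressed with client-go instead of minikube's internal round-trippers (the kubeconfig path is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms for up to 6 minutes,
// returning once its Ready condition is True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-381619-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
```
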
	I1028 17:26:33.066397   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:26:33.066462   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:33.066470   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.066477   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.066482   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.074203   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:33.082515   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.082586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:26:33.082595   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.082602   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.082607   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.095144   32020 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 17:26:33.095832   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.095846   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.095854   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.095858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.101134   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:33.101733   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.101757   32020 pod_ready.go:82] duration metric: took 19.21928ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101770   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101833   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:26:33.101844   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.101853   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.101858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.105945   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.108337   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.108355   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.108367   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.108372   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.113026   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.113662   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.113683   32020 pod_ready.go:82] duration metric: took 11.906137ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113694   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113752   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:26:33.113762   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.113774   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.113782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.123002   32020 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 17:26:33.123632   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.123647   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.123654   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.123658   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.127965   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.128570   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.128593   32020 pod_ready.go:82] duration metric: took 14.890353ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128604   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128669   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:26:33.128680   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.128690   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.128695   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.132736   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.133266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.133282   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.133291   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.133297   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.135365   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:33.135735   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.135750   32020 pod_ready.go:82] duration metric: took 7.136636ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.135762   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.261122   32020 request.go:632] Waited for 125.293136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261209   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261217   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.261226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.261234   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.263967   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
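
Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" lines above come from the client-side QPS/burst rate limiter in the Kubernetes Go client, not from server-side throttling. A minimal sketch of the same pattern using golang.org/x/time/rate; the limiter values here are illustrative assumptions, not minikube's actual settings.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Allow roughly 5 requests per second with a burst of 10 (assumed values).
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	for i := 0; i < 20; i++ {
		start := time.Now()
		// Wait blocks until the limiter permits another request; the time
		// spent blocked is what the "Waited for ..." log lines report.
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d: waited %v due to client-side throttling\n", i, waited)
		}
	}
}
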
	I1028 17:26:33.461031   32020 request.go:632] Waited for 196.380501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461114   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461126   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.461137   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.461148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.465245   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.465839   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.465854   32020 pod_ready.go:82] duration metric: took 330.085581ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.465863   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.661130   32020 request.go:632] Waited for 195.210858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661218   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.661226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.661231   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.664592   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.861613   32020 request.go:632] Waited for 196.398754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861693   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.861703   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.861708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.865300   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.865923   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.865943   32020 pod_ready.go:82] duration metric: took 400.074085ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.865954   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.061082   32020 request.go:632] Waited for 195.035949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061146   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061154   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.061164   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.061177   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.065243   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:34.261295   32020 request.go:632] Waited for 195.377372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261362   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261369   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.261377   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.261384   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.264122   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:34.264806   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.264824   32020 pod_ready.go:82] duration metric: took 398.860925ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.264834   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.461015   32020 request.go:632] Waited for 196.107238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461086   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461092   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.461099   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.461107   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.464532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.661679   32020 request.go:632] Waited for 196.369344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661755   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.661763   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.661769   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.664905   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.665450   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.665471   32020 pod_ready.go:82] duration metric: took 400.628457ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.665485   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.861555   32020 request.go:632] Waited for 195.998426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861607   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861612   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.861619   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.861625   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.865054   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.061002   32020 request.go:632] Waited for 195.260133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061074   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061081   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.061090   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.061103   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.067316   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:35.067855   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.067872   32020 pod_ready.go:82] duration metric: took 402.381503ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.067883   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.261021   32020 request.go:632] Waited for 193.06469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261075   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261080   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.261087   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.261091   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.264532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.461647   32020 request.go:632] Waited for 196.379594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461699   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461704   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.461712   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.461716   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.464708   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:35.465310   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.465326   32020 pod_ready.go:82] duration metric: took 397.438256ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.465336   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.660832   32020 request.go:632] Waited for 195.429914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660887   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660892   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.660901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.660906   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.664825   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.861091   32020 request.go:632] Waited for 195.400527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861176   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861185   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.861193   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.861199   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.864874   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.865496   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.865512   32020 pod_ready.go:82] duration metric: took 400.170514ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.865524   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.061640   32020 request.go:632] Waited for 196.040174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061702   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.061709   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.061712   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.067912   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:36.260741   32020 request.go:632] Waited for 192.270672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260796   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260801   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.260808   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.260811   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.264431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.265062   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:36.265078   32020 pod_ready.go:82] duration metric: took 399.548106ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.265089   32020 pod_ready.go:39] duration metric: took 3.19868237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
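
Editor's note: the pod_ready loop above repeatedly GETs each control-plane pod (and its node) until the pod reports a Ready condition of True, with a 6m0s budget per pod. A minimal sketch of that readiness check with client-go; the kubeconfig path and the fixed pod name are placeholders, not values from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-381619", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
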
	I1028 17:26:36.265105   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:26:36.265162   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:26:36.280395   32020 api_server.go:72] duration metric: took 23.441229274s to wait for apiserver process to appear ...
	I1028 17:26:36.280422   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:26:36.280444   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:26:36.284951   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
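
Editor's note: the apiserver health gate is an HTTPS GET against /healthz that expects a 200 response with an "ok" body. A rough sketch of that probe; certificate handling is simplified to InsecureSkipVerify for brevity, whereas the real check trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; minikube verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.39.230:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
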
	I1028 17:26:36.285015   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:26:36.285023   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.285030   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.285034   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.285954   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:26:36.286036   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:26:36.286049   32020 api_server.go:131] duration metric: took 5.621129ms to wait for apiserver health ...
	I1028 17:26:36.286055   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:26:36.461480   32020 request.go:632] Waited for 175.36266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461560   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461566   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.461573   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.461579   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.465870   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.471332   32020 system_pods.go:59] 17 kube-system pods found
	I1028 17:26:36.471364   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.471372   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.471378   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.471384   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.471389   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.471394   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.471398   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.471404   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.471410   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.471415   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.471420   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.471423   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.471427   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.471431   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.471439   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.471443   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.471447   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.471452   32020 system_pods.go:74] duration metric: took 185.392371ms to wait for pod list to return data ...
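
Editor's note: the system_pods step simply lists everything in kube-system and reports each pod's phase, producing the "17 kube-system pods found" summary above. A compact sketch of that listing with client-go; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("  %q [%s] Running=%v\n", p.Name, p.UID, running)
	}
}
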
	I1028 17:26:36.471461   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:26:36.660798   32020 request.go:632] Waited for 189.265217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660858   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660865   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.660876   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.660890   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.664250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.664492   32020 default_sa.go:45] found service account: "default"
	I1028 17:26:36.664512   32020 default_sa.go:55] duration metric: took 193.044588ms for default service account to be created ...
	I1028 17:26:36.664525   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:26:36.860686   32020 request.go:632] Waited for 196.070222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860774   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860785   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.860796   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.860806   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.865017   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.869263   32020 system_pods.go:86] 17 kube-system pods found
	I1028 17:26:36.869283   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.869289   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.869294   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.869300   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.869305   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.869318   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.869324   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.869332   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.869341   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.869344   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.869348   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.869351   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.869355   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.869359   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.869362   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.869368   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.869371   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.869378   32020 system_pods.go:126] duration metric: took 204.847439ms to wait for k8s-apps to be running ...
	I1028 17:26:36.869387   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:26:36.869438   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:26:36.887558   32020 system_svc.go:56] duration metric: took 18.164041ms WaitForService to wait for kubelet
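
Editor's note: the kubelet service check above runs systemctl over SSH and only cares about the exit status ("is-active --quiet" prints nothing). A local sketch of the same exit-status check with os/exec; in the real flow the command is executed on the guest via the ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; a zero exit status means the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
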
	I1028 17:26:36.887583   32020 kubeadm.go:582] duration metric: took 24.048418465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:26:36.887603   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:26:37.061041   32020 request.go:632] Waited for 173.358173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061125   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061137   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:37.061147   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:37.061157   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:37.065908   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:37.066717   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066739   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066750   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066754   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066758   32020 node_conditions.go:105] duration metric: took 179.146781ms to run NodePressure ...
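
Editor's note: the NodePressure step reads each node's reported capacity (ephemeral storage and CPU) from the Node objects, which is where the "17734596Ki" and "cpu capacity is 2" lines come from. A hedged sketch of reading those fields; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
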
	I1028 17:26:37.066780   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:26:37.066813   32020 start.go:255] writing updated cluster config ...
	I1028 17:26:37.068764   32020 out.go:201] 
	I1028 17:26:37.070024   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:37.070105   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.071682   32020 out.go:177] * Starting "ha-381619-m03" control-plane node in "ha-381619" cluster
	I1028 17:26:37.072951   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:26:37.072974   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:26:37.073061   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:26:37.073071   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:26:37.073157   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.073328   32020 start.go:360] acquireMachinesLock for ha-381619-m03: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:26:37.073367   32020 start.go:364] duration metric: took 22.448µs to acquireMachinesLock for "ha-381619-m03"
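
Editor's note: acquireMachinesLock serializes concurrent machine creation behind a named lock before provisioning starts. One minimal way to get a similar effect on Linux is an advisory flock on a lock file; this is purely an illustration under that assumption, not minikube's actual lock implementation, and the lock-file path is made up.

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func main() {
	// Hypothetical lock-file path, for illustration only.
	f, err := os.OpenFile("/tmp/ha-381619-m03.lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	start := time.Now()
	// LOCK_EX blocks until no other process holds the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		panic(err)
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

	fmt.Printf("acquired machines lock after %v\n", time.Since(start))
	// ... provision the machine while holding the lock ...
}
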
	I1028 17:26:37.073383   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:37.073468   32020 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 17:26:37.074992   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:26:37.075063   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:26:37.075098   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:26:37.089635   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I1028 17:26:37.090045   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:26:37.090591   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:26:37.090617   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:26:37.090932   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:26:37.091131   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:26:37.091290   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:26:37.091438   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:26:37.091470   32020 client.go:168] LocalClient.Create starting
	I1028 17:26:37.091506   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:26:37.091543   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091562   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091624   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:26:37.091649   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091665   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091691   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:26:37.091702   32020 main.go:141] libmachine: (ha-381619-m03) Calling .PreCreateCheck
	I1028 17:26:37.091853   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:26:37.092216   32020 main.go:141] libmachine: Creating machine...
	I1028 17:26:37.092231   32020 main.go:141] libmachine: (ha-381619-m03) Calling .Create
	I1028 17:26:37.092346   32020 main.go:141] libmachine: (ha-381619-m03) Creating KVM machine...
	I1028 17:26:37.093689   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing default KVM network
	I1028 17:26:37.093825   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing private KVM network mk-ha-381619
	I1028 17:26:37.094015   32020 main.go:141] libmachine: (ha-381619-m03) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.094041   32020 main.go:141] libmachine: (ha-381619-m03) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:26:37.094128   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.093979   32807 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.094183   32020 main.go:141] libmachine: (ha-381619-m03) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:26:37.334476   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.334350   32807 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa...
	I1028 17:26:37.512343   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512238   32807 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk...
	I1028 17:26:37.512368   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing magic tar header
	I1028 17:26:37.512408   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing SSH key tar header
	I1028 17:26:37.512432   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512349   32807 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.512450   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03
	I1028 17:26:37.512458   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:26:37.512478   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.512486   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:26:37.512517   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:26:37.512536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:26:37.512545   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 (perms=drwx------)
	I1028 17:26:37.512553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home
	I1028 17:26:37.512565   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:26:37.512581   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:26:37.512594   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:26:37.512609   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:26:37.512619   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Skipping /home - not owner
	I1028 17:26:37.512629   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:26:37.512638   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:37.513512   32020 main.go:141] libmachine: (ha-381619-m03) define libvirt domain using xml: 
	I1028 17:26:37.513530   32020 main.go:141] libmachine: (ha-381619-m03) <domain type='kvm'>
	I1028 17:26:37.513546   32020 main.go:141] libmachine: (ha-381619-m03)   <name>ha-381619-m03</name>
	I1028 17:26:37.513552   32020 main.go:141] libmachine: (ha-381619-m03)   <memory unit='MiB'>2200</memory>
	I1028 17:26:37.513557   32020 main.go:141] libmachine: (ha-381619-m03)   <vcpu>2</vcpu>
	I1028 17:26:37.513561   32020 main.go:141] libmachine: (ha-381619-m03)   <features>
	I1028 17:26:37.513566   32020 main.go:141] libmachine: (ha-381619-m03)     <acpi/>
	I1028 17:26:37.513572   32020 main.go:141] libmachine: (ha-381619-m03)     <apic/>
	I1028 17:26:37.513577   32020 main.go:141] libmachine: (ha-381619-m03)     <pae/>
	I1028 17:26:37.513584   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513589   32020 main.go:141] libmachine: (ha-381619-m03)   </features>
	I1028 17:26:37.513595   32020 main.go:141] libmachine: (ha-381619-m03)   <cpu mode='host-passthrough'>
	I1028 17:26:37.513600   32020 main.go:141] libmachine: (ha-381619-m03)   
	I1028 17:26:37.513606   32020 main.go:141] libmachine: (ha-381619-m03)   </cpu>
	I1028 17:26:37.513611   32020 main.go:141] libmachine: (ha-381619-m03)   <os>
	I1028 17:26:37.513617   32020 main.go:141] libmachine: (ha-381619-m03)     <type>hvm</type>
	I1028 17:26:37.513622   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='cdrom'/>
	I1028 17:26:37.513630   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='hd'/>
	I1028 17:26:37.513634   32020 main.go:141] libmachine: (ha-381619-m03)     <bootmenu enable='no'/>
	I1028 17:26:37.513638   32020 main.go:141] libmachine: (ha-381619-m03)   </os>
	I1028 17:26:37.513643   32020 main.go:141] libmachine: (ha-381619-m03)   <devices>
	I1028 17:26:37.513647   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='cdrom'>
	I1028 17:26:37.513655   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/boot2docker.iso'/>
	I1028 17:26:37.513660   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hdc' bus='scsi'/>
	I1028 17:26:37.513664   32020 main.go:141] libmachine: (ha-381619-m03)       <readonly/>
	I1028 17:26:37.513668   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513673   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='disk'>
	I1028 17:26:37.513679   32020 main.go:141] libmachine: (ha-381619-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:26:37.513689   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk'/>
	I1028 17:26:37.513697   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hda' bus='virtio'/>
	I1028 17:26:37.513728   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513752   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513762   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='mk-ha-381619'/>
	I1028 17:26:37.513777   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513799   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513818   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513832   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='default'/>
	I1028 17:26:37.513842   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513850   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513860   32020 main.go:141] libmachine: (ha-381619-m03)     <serial type='pty'>
	I1028 17:26:37.513868   32020 main.go:141] libmachine: (ha-381619-m03)       <target port='0'/>
	I1028 17:26:37.513877   32020 main.go:141] libmachine: (ha-381619-m03)     </serial>
	I1028 17:26:37.513888   32020 main.go:141] libmachine: (ha-381619-m03)     <console type='pty'>
	I1028 17:26:37.513899   32020 main.go:141] libmachine: (ha-381619-m03)       <target type='serial' port='0'/>
	I1028 17:26:37.513908   32020 main.go:141] libmachine: (ha-381619-m03)     </console>
	I1028 17:26:37.513919   32020 main.go:141] libmachine: (ha-381619-m03)     <rng model='virtio'>
	I1028 17:26:37.513932   32020 main.go:141] libmachine: (ha-381619-m03)       <backend model='random'>/dev/random</backend>
	I1028 17:26:37.513941   32020 main.go:141] libmachine: (ha-381619-m03)     </rng>
	I1028 17:26:37.513954   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513965   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513971   32020 main.go:141] libmachine: (ha-381619-m03)   </devices>
	I1028 17:26:37.513978   32020 main.go:141] libmachine: (ha-381619-m03) </domain>
	I1028 17:26:37.513992   32020 main.go:141] libmachine: (ha-381619-m03) 
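
Editor's note: the block above is the libvirt domain XML the kvm2 driver defines for the new node. A small sketch of generating a comparable (heavily trimmed) definition with text/template and handing it to virsh; the template fields, disk path, and the use of the virsh CLI are illustrative assumptions rather than the driver's actual code path.

package main

import (
	"os"
	"os/exec"
	"text/template"
)

// domainTemplate is a trimmed-down stand-in for the XML shown above.
const domainTemplate = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "ha-381619-m03",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-381619-m03.rawdisk", // placeholder
		Network:   "mk-ha-381619",
	}

	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())

	// Render the domain definition into the temp file.
	if err := template.Must(template.New("domain").Parse(domainTemplate)).Execute(f, cfg); err != nil {
		panic(err)
	}
	f.Close()

	// Define and start the domain; requires virsh and libvirt on the host.
	if out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput(); err != nil {
		panic(string(out))
	}
	if out, err := exec.Command("virsh", "start", cfg.Name).CombinedOutput(); err != nil {
		panic(string(out))
	}
}
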
	I1028 17:26:37.520796   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:6b:b8:f1 in network default
	I1028 17:26:37.521360   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring networks are active...
	I1028 17:26:37.521387   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:37.521985   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network default is active
	I1028 17:26:37.522251   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network mk-ha-381619 is active
	I1028 17:26:37.522562   32020 main.go:141] libmachine: (ha-381619-m03) Getting domain xml...
	I1028 17:26:37.523108   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:38.733507   32020 main.go:141] libmachine: (ha-381619-m03) Waiting to get IP...
	I1028 17:26:38.734445   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:38.734847   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:38.734874   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:38.734831   32807 retry.go:31] will retry after 277.511241ms: waiting for machine to come up
	I1028 17:26:39.014311   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.014705   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.014731   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.014657   32807 retry.go:31] will retry after 249.568431ms: waiting for machine to come up
	I1028 17:26:39.266003   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.266417   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.266438   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.266379   32807 retry.go:31] will retry after 332.313659ms: waiting for machine to come up
	I1028 17:26:39.599811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.600199   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.600224   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.600155   32807 retry.go:31] will retry after 498.320063ms: waiting for machine to come up
	I1028 17:26:40.099601   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.100068   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.100102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.100010   32807 retry.go:31] will retry after 620.508522ms: waiting for machine to come up
	I1028 17:26:40.721631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.722075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.722102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.722032   32807 retry.go:31] will retry after 786.320854ms: waiting for machine to come up
	I1028 17:26:41.509664   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:41.510180   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:41.510208   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:41.510141   32807 retry.go:31] will retry after 1.021116287s: waiting for machine to come up
	I1028 17:26:42.532494   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:42.532913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:42.532943   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:42.532860   32807 retry.go:31] will retry after 1.335656065s: waiting for machine to come up
	I1028 17:26:43.870447   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:43.870913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:43.870940   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:43.870865   32807 retry.go:31] will retry after 1.720265412s: waiting for machine to come up
	I1028 17:26:45.593694   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:45.594300   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:45.594326   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:45.594243   32807 retry.go:31] will retry after 1.629048478s: waiting for machine to come up
	I1028 17:26:47.224808   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:47.225182   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:47.225207   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:47.225159   32807 retry.go:31] will retry after 2.592881751s: waiting for machine to come up
	I1028 17:26:49.819232   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:49.819722   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:49.819742   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:49.819691   32807 retry.go:31] will retry after 2.406064511s: waiting for machine to come up
	I1028 17:26:52.227365   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:52.227723   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:52.227744   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:52.227706   32807 retry.go:31] will retry after 4.047640597s: waiting for machine to come up
	I1028 17:26:56.276662   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:56.277135   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:56.277158   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:56.277104   32807 retry.go:31] will retry after 4.243512083s: waiting for machine to come up
	I1028 17:27:00.523220   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.523671   32020 main.go:141] libmachine: (ha-381619-m03) Found IP for machine: 192.168.39.17
	I1028 17:27:00.523698   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has current primary IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.523706   32020 main.go:141] libmachine: (ha-381619-m03) Reserving static IP address...
	I1028 17:27:00.524025   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "ha-381619-m03", mac: "52:54:00:d7:8c:62", ip: "192.168.39.17"} in network mk-ha-381619
	I1028 17:27:00.592781   32020 main.go:141] libmachine: (ha-381619-m03) Reserved static IP address: 192.168.39.17
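
Editor's note: the "waiting for machine to come up" loop above polls for a DHCP lease matching the VM's MAC address, backing off with growing delays between attempts. A small sketch of the same retry pattern, querying the lease via virsh; the network name, MAC, and the lease parsing are simplified assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// leaseIP returns the first IPv4 address libvirt reports for the given
// MAC on the given network, or "" if no lease exists yet.
func leaseIP(network, mac string) string {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
	if err != nil {
		return ""
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(line, mac) {
			continue
		}
		for _, field := range strings.Fields(line) {
			if strings.Contains(field, "/") { // e.g. 192.168.39.17/24
				return strings.SplitN(field, "/", 2)[0]
			}
		}
	}
	return ""
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip := leaseIP("mk-ha-381619", "52:54:00:d7:8c:62"); ip != "" {
			fmt.Println("found IP for machine:", ip)
			return
		}
		fmt.Printf("attempt %d: no lease yet, will retry after %v\n", attempt, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly like the backoff in the log
	}
	fmt.Println("timed out waiting for machine to come up")
}
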
	I1028 17:27:00.592808   32020 main.go:141] libmachine: (ha-381619-m03) Waiting for SSH to be available...
	I1028 17:27:00.592817   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:00.595728   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.595996   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619
	I1028 17:27:00.596032   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:d7:8c:62
	I1028 17:27:00.596173   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:00.596195   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:00.596242   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:00.596266   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:00.596292   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:00.599869   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:27:00.599886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:27:00.599893   32020 main.go:141] libmachine: (ha-381619-m03) DBG | command : exit 0
	I1028 17:27:00.599897   32020 main.go:141] libmachine: (ha-381619-m03) DBG | err     : exit status 255
	I1028 17:27:00.599912   32020 main.go:141] libmachine: (ha-381619-m03) DBG | output  : 
	I1028 17:27:03.600719   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:03.602993   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603307   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.603342   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603475   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:03.603507   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:03.603540   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:03.603558   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:03.603573   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:03.732419   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: <nil>: 
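
Editor's note: WaitForSSH simply retries "exit 0" over SSH against the new address until the command exits 0; the first attempt above fails with status 255 because the guest is not reachable yet. A rough sketch of that probe with os/exec; the host, key path, and retry budget are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the target host and reports whether it succeeded.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.39.17"           // placeholder target
	key := "/path/to/machines/id_rsa" // placeholder key path
	for attempt := 1; attempt <= 20; attempt++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d: SSH not ready, retrying\n", attempt)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
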
	I1028 17:27:03.732661   32020 main.go:141] libmachine: (ha-381619-m03) KVM machine creation complete!
	I1028 17:27:03.732966   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:03.733514   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733669   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733799   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:27:03.733816   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetState
	I1028 17:27:03.734895   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:27:03.734910   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:27:03.734928   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:27:03.734939   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.737530   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.737905   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.737933   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.738103   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.738238   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738419   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738528   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.738669   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.738865   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.738879   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:27:03.843630   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:27:03.843655   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:27:03.843666   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.846510   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.846865   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.846886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.847091   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.847261   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847412   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847510   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.847671   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.847870   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.847884   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:27:03.953430   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:27:03.953486   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:27:03.953497   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:27:03.953508   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.953779   32020 buildroot.go:166] provisioning hostname "ha-381619-m03"
	I1028 17:27:03.953819   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.954012   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.956989   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957430   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.957456   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957613   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.957773   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.957930   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.958072   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.958232   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.958456   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.958476   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m03 && echo "ha-381619-m03" | sudo tee /etc/hostname
	I1028 17:27:04.082564   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m03
	
	I1028 17:27:04.082596   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.085190   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085543   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.085567   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.085952   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086175   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.086298   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.086473   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.086494   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:27:04.201141   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
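The snippet above is minikube's replace-or-append idiom for /etc/hosts: if no existing entry already ends in the new hostname, it either rewrites the 127.0.1.1 line or appends one, so the guest can resolve its own name locally. The resulting entry is simply:

	127.0.1.1 ha-381619-m03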
	I1028 17:27:04.201171   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:27:04.201191   32020 buildroot.go:174] setting up certificates
	I1028 17:27:04.201204   32020 provision.go:84] configureAuth start
	I1028 17:27:04.201213   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:04.201449   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.204201   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.204661   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204749   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.206751   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.207092   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207247   32020 provision.go:143] copyHostCerts
	I1028 17:27:04.207276   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207314   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:27:04.207334   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207429   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:27:04.207519   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207543   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:27:04.207552   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207589   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:27:04.207646   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207670   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:27:04.207679   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207710   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:27:04.207772   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m03 san=[127.0.0.1 192.168.39.17 ha-381619-m03 localhost minikube]
	I1028 17:27:04.311071   32020 provision.go:177] copyRemoteCerts
	I1028 17:27:04.311121   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:27:04.311145   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.313577   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.313977   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.314019   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.314168   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.314347   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.314472   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.314623   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.403135   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:27:04.403211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:27:04.427834   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:27:04.427894   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:27:04.450833   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:27:04.450900   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:27:04.473452   32020 provision.go:87] duration metric: took 272.234677ms to configureAuth
	I1028 17:27:04.473476   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:27:04.473653   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:04.473713   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.476526   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.476861   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.476881   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.477065   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.477235   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477353   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477466   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.477631   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.477821   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.477837   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:27:04.708532   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:27:04.708562   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:27:04.708571   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetURL
	I1028 17:27:04.709704   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using libvirt version 6000000
	I1028 17:27:04.711553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.711850   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.711877   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.712051   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:27:04.712065   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:27:04.712074   32020 client.go:171] duration metric: took 27.620592933s to LocalClient.Create
	I1028 17:27:04.712101   32020 start.go:167] duration metric: took 27.620663816s to libmachine.API.Create "ha-381619"
	I1028 17:27:04.712114   32020 start.go:293] postStartSetup for "ha-381619-m03" (driver="kvm2")
	I1028 17:27:04.712128   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:27:04.712149   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.712379   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:27:04.712408   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.714536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.714835   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.714862   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.715020   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.715209   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.715341   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.715464   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.799357   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:27:04.803701   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:27:04.803723   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:27:04.803779   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:27:04.803846   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:27:04.803856   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:27:04.803932   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:27:04.813520   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:04.836571   32020 start.go:296] duration metric: took 124.443928ms for postStartSetup
	I1028 17:27:04.836615   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:04.837172   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.839735   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840084   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.840105   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840305   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:27:04.840512   32020 start.go:128] duration metric: took 27.767033157s to createHost
	I1028 17:27:04.840535   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.842741   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.843096   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843188   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.843354   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843499   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843648   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.843814   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.843957   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.843967   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:27:04.948925   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136424.929789330
	
	I1028 17:27:04.948945   32020 fix.go:216] guest clock: 1730136424.929789330
	I1028 17:27:04.948951   32020 fix.go:229] Guest: 2024-10-28 17:27:04.92978933 +0000 UTC Remote: 2024-10-28 17:27:04.840524406 +0000 UTC m=+152.171492636 (delta=89.264924ms)
	I1028 17:27:04.948966   32020 fix.go:200] guest clock delta is within tolerance: 89.264924ms
	I1028 17:27:04.948971   32020 start.go:83] releasing machines lock for "ha-381619-m03", held for 27.875595959s
	I1028 17:27:04.948986   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.949230   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.952087   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.952552   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.952580   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.954772   32020 out.go:177] * Found network options:
	I1028 17:27:04.956124   32020 out.go:177]   - NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:04.957329   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957826   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957978   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.958075   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:27:04.958124   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.958183   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:27:04.958205   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.960811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961141   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961168   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961186   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961307   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961462   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.961599   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.961617   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961637   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961711   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.961806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961908   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.962057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.962208   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:05.194026   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:27:05.201042   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:27:05.201105   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:27:05.217646   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:27:05.217662   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:27:05.217711   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:27:05.236089   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:27:05.251712   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:27:05.251757   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:27:05.266922   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:27:05.282192   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:27:05.400766   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:27:05.540458   32020 docker.go:233] disabling docker service ...
	I1028 17:27:05.540536   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:27:05.554384   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:27:05.566632   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:27:05.704365   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:27:05.814298   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:27:05.832161   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:27:05.850391   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:27:05.850440   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.860158   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:27:05.860214   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.870182   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.880040   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.890188   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:27:05.901036   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.911295   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.928814   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.939099   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:27:05.949052   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:27:05.949107   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:27:05.961188   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:27:05.970308   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:06.082126   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
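Net effect of the sed edits above on CRI-O's drop-in /etc/crio/crio.conf.d/02-crio.conf (a sketch of just the touched keys; the file on the node carries more settings):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart that follow make the runtime pick these up, while br_netfilter is loaded and ip_forward enabled so pod traffic can be bridged and NATed.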
	I1028 17:27:06.186312   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:27:06.186399   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:27:06.191449   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:27:06.191503   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:27:06.195251   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:27:06.231675   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:27:06.231743   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.263999   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.295360   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:27:06.296610   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:27:06.297916   32020 out.go:177]   - env NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:06.299066   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:06.302357   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.302805   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:06.302853   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.303125   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:27:06.307684   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:27:06.322487   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:27:06.322674   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:06.322921   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.322954   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.337329   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I1028 17:27:06.337793   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.338350   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.338369   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.338643   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.338806   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:27:06.340173   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:06.340491   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.340528   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.354028   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I1028 17:27:06.354441   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.354853   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.354871   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.355207   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.355398   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:06.355555   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.17
	I1028 17:27:06.355568   32020 certs.go:194] generating shared ca certs ...
	I1028 17:27:06.355587   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.355706   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:27:06.355743   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:27:06.355752   32020 certs.go:256] generating profile certs ...
	I1028 17:27:06.355818   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:27:06.355840   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131
	I1028 17:27:06.355854   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.17 192.168.39.254]
	I1028 17:27:06.615352   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 ...
	I1028 17:27:06.615384   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131: {Name:mk30b1e5a01615c193463ae31058813eb757a15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615571   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 ...
	I1028 17:27:06.615587   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131: {Name:mkc1142cb1e41a27aeb0597e6f743604179f8b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615684   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:27:06.615844   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
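Adding a third control plane means the apiserver serving certificate has to be reissued: the SAN list generated above covers 10.96.0.1, 127.0.0.1, 10.0.0.1, the existing control planes 192.168.39.230 and 192.168.39.171, the new node 192.168.39.17, and the HA virtual IP 192.168.39.254. A quick way to confirm what a generated cert covers (a generic openssl check, not part of the minikube flow):

	openssl x509 -noout -text -in apiserver.crt | grep -A1 'Subject Alternative Name'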
	I1028 17:27:06.616012   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:27:06.616031   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:27:06.616048   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:27:06.616067   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:27:06.616091   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:27:06.616107   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:27:06.616121   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:27:06.616138   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:27:06.632549   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:27:06.632628   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:27:06.632669   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:27:06.632680   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:27:06.632702   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:27:06.632732   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:27:06.632764   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:27:06.632808   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:06.632854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:27:06.632879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:06.632897   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:27:06.632955   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:06.635620   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.635992   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:06.636039   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.636203   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:06.636373   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:06.636547   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:06.636691   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:06.708743   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:27:06.714395   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:27:06.725274   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:27:06.729452   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:27:06.739682   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:27:06.743778   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:27:06.753533   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:27:06.757406   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:27:06.768515   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:27:06.772684   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:27:06.783594   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:27:06.788182   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:27:06.798917   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:27:06.824680   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:27:06.848168   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:27:06.870934   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:27:06.894622   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 17:27:06.916995   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:27:06.939854   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:27:06.962079   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:27:06.985176   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:27:07.007959   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:27:07.031196   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:27:07.054116   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:27:07.071809   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:27:07.087821   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:27:07.105114   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:27:07.121456   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:27:07.137929   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:27:07.153936   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 17:27:07.169928   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:27:07.176125   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:27:07.186611   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191749   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191791   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.197474   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:27:07.208145   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:27:07.219642   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224041   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224081   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.229665   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:27:07.240477   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:27:07.251279   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255404   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255446   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.260823   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
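The openssl x509 -hash / ln -fs pairs above wire up the node's trust store: OpenSSL locates CAs in /etc/ssl/certs by subject-hash filename, so every installed .pem gets a matching <hash>.0 symlink. Done by hand for one bundle it looks roughly like this:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"

which is exactly the b5213941.0 link created above.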
	I1028 17:27:07.271234   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:27:07.275094   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:27:07.275142   32020 kubeadm.go:934] updating node {m03 192.168.39.17 8443 v1.31.2 crio true true} ...
	I1028 17:27:07.275277   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:27:07.275318   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:27:07.275356   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:27:07.290975   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:27:07.291032   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
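In effect, this static pod runs kube-vip on each control-plane node: leader election over the plndr-cp-lock lease decides which node announces the virtual IP 192.168.39.254 (the APIServerHAVIP from the profile) via ARP on eth0, and lb_enable/lb_port spread API-server traffic on port 8443 across the control planes, so clients keep a working endpoint if any single control plane goes down.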
	I1028 17:27:07.291070   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.301885   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:27:07.301930   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.312754   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 17:27:07.312779   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312836   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:27:07.312864   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 17:27:07.312926   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312927   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:07.317184   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:27:07.317211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:27:07.352999   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:27:07.353042   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:27:07.353044   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.353130   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.410351   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:27:07.410406   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
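All three binaries above follow the same pattern: stat the file under /var/lib/minikube/binaries/v1.31.2 and only transfer it when the stat fails, with the original download keyed to the published .sha256 file on dl.k8s.io. A minimal, self-contained sketch of that download-and-verify idea (hard-coded URL and destination path; this is not minikube's actual cache/scp code path):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "path/filepath"
        "strings"
    )

    // fetch downloads a URL into memory and fails on any non-200 status.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        const dst = "/var/lib/minikube/binaries/v1.31.2/kubeadm" // placeholder destination
        const url = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm"

        if _, err := os.Stat(dst); err == nil {
            return // binary already present, nothing to do
        }
        bin, err := fetch(url)
        if err != nil {
            log.Fatal(err)
        }
        sum, err := fetch(url + ".sha256") // published checksum file
        if err != nil {
            log.Fatal(err)
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0]
        if hex.EncodeToString(got[:]) != want {
            log.Fatalf("checksum mismatch for %s", dst)
        }
        if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(dst, bin, 0o755); err != nil {
            log.Fatal(err)
        }
    }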
	I1028 17:27:08.136367   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:27:08.145689   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 17:27:08.162514   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:27:08.178802   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:27:08.195002   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:27:08.198953   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:27:08.210803   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:08.352163   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
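The bash one-liner at 17:27:08.198953 rewrites /etc/hosts: it drops any existing line ending in the tab-separated control-plane.minikube.internal name and appends the VIP mapping. The same edit in plain Go, for illustration only (minikube runs the bash snippet over SSH via ssh_runner rather than anything like this):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const entry = "192.168.39.254\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop any stale mapping for the control-plane name (lines ending "\t<host>").
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
            log.Fatal(err)
        }
    }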
	I1028 17:27:08.377094   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:08.377585   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:08.377645   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:08.394262   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I1028 17:27:08.394687   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:08.395242   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:08.395276   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:08.395635   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:08.395837   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:08.396078   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:27:08.396215   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:27:08.396230   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:08.399082   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399537   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:08.399566   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399713   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:08.399904   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:08.400043   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:08.400171   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:08.552541   32020 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:08.552592   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I1028 17:27:30.870343   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (22.317699091s)
	I1028 17:27:30.870408   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:27:31.352565   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m03 minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:27:31.535264   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:27:31.653788   32020 start.go:319] duration metric: took 23.257712014s to joinCluster
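The 23s joinCluster step above boils down to two remote commands plus bookkeeping: "kubeadm token create --print-join-command --ttl=0" on the existing control plane, then the printed join command on the new machine with --control-plane and its own advertise address appended, followed by the kubectl label and taint-removal calls shown at 17:27:31. A stripped-down local sketch of that command pair (no SSH, simplified flags, error handling reduced; not minikube's start.go):

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // run executes a shell command and returns its trimmed stdout.
    func run(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // On an existing control-plane node: mint a join token and command.
        joinCmd, err := run("sudo kubeadm token create --print-join-command --ttl=0")
        if err != nil {
            log.Fatal(err)
        }
        // On the joining node: run it as a control-plane join with its own advertise address.
        full := "sudo " + joinCmd +
            " --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
        if _, err := run(full); err != nil {
            log.Fatal(err)
        }
    }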
	I1028 17:27:31.653906   32020 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:31.654293   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:31.655305   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:27:31.656854   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:31.931462   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:27:32.007668   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:27:32.008012   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:27:32.008099   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:27:32.008418   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m03" to be "Ready" ...
	I1028 17:27:32.008555   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.008568   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.008580   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.008590   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.012013   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:32.509493   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.509514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.509522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.509526   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.512995   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:33.008792   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.008813   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.008823   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.008831   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.013277   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:33.509021   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.509043   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.509053   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.509059   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.512568   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.009494   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.009514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.009522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.009525   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.012872   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.013477   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:34.508671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.508698   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.508711   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.508717   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.511657   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.009518   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.009538   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.009546   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.009549   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.012353   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.509512   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.509539   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.509551   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.509564   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.513144   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.009477   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.009496   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.009503   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.009508   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.012424   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:36.509250   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.509279   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.509290   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.509295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.512794   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.513405   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:37.008636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.008657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.008668   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.008676   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.011455   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:37.509093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.509123   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.509127   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.512558   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.009185   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.009214   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.009222   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.009226   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.012314   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.508924   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.508943   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.508951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.508955   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.511947   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.008656   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.008679   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.008691   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.008698   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.011261   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.011779   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:39.509251   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.509272   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.509279   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.509283   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.512371   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:40.009266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.009299   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.013354   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:40.509289   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.509307   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.509315   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.509320   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.512591   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:41.009123   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.009146   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.009163   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.014310   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:41.014943   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:41.509077   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.509126   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.509134   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.512425   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.008587   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.008609   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.008621   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.008627   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.012270   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.509586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.509607   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.509615   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.509621   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.512638   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:43.009220   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.009238   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.009248   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.009256   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.012180   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.508622   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.508646   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.508656   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.508660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.511470   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.512019   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:44.009130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.009150   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.009161   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.012525   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:44.509423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.509446   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.509457   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.509462   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.513302   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.009198   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.009218   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.009225   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.009230   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.012566   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.508621   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.508641   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.508649   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.508652   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.511562   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:45.512081   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:46.008747   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.008770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.008778   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.008782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.011847   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:46.509246   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.509269   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.509277   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.509281   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.512939   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:47.008680   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.008703   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.008713   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.008719   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.030138   32020 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1028 17:27:47.508630   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.508650   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.508657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.508663   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.514479   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:47.515054   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:48.008911   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.008931   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.008940   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.008944   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.012001   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:48.509098   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.509121   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.509132   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.509138   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.512351   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.008615   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.008635   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.008643   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.008647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.011780   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.508700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.508723   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.508731   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.508735   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.511993   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.008627   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.008648   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.008657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.008660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.012285   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.012911   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:50.509280   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.509301   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.509309   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.509321   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.512855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.009269   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.009303   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.012097   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.509273   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.509293   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.509304   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.509309   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.512305   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.513072   32020 node_ready.go:49] node "ha-381619-m03" has status "Ready":"True"
	I1028 17:27:51.513099   32020 node_ready.go:38] duration metric: took 19.504662706s for node "ha-381619-m03" to be "Ready" ...
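The half-second GET loop above is nothing more than re-reading the Node object until its Ready condition reports True. A compact client-go equivalent, assuming a clientset built as in the earlier sketch (package and function names are illustrative, not minikube's node_ready.go):

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named node until its NodeReady condition is True,
    // or the context is cancelled, mirroring the ~500ms GET loop in the log.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }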
	I1028 17:27:51.513110   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:51.513182   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:51.513193   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.513203   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.513209   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.518727   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
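That single GET of /api/v1/namespaces/kube-system/pods pulls every system pod once; the per-component waits that follow then re-check each pod individually. For reference, fetching just one of the label groups named at 17:27:51.513110 with client-go would look like this (illustrative helper, not part of minikube):

    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // coreDNSPods lists the kube-system pods carrying the k8s-app=kube-dns label,
    // one of the system-critical label selectors the log names.
    func coreDNSPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
        list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
            LabelSelector: "k8s-app=kube-dns",
        })
        if err != nil {
            return nil, err
        }
        return list.Items, nil
    }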
	I1028 17:27:51.525983   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.526072   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:27:51.526088   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.526100   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.526111   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.531963   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:51.532739   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.532753   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.532761   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.532764   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.535083   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.535631   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.535649   32020 pod_ready.go:82] duration metric: took 9.646144ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535657   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:27:51.535707   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.535714   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.535721   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.538224   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.538964   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.538979   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.538986   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.538990   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.541964   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.542349   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.542364   32020 pod_ready.go:82] duration metric: took 6.701109ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542375   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542424   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:27:51.542434   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.542441   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.542447   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.544839   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.545361   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.545376   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.545385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.545392   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.547384   32020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 17:27:51.547876   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.547890   32020 pod_ready.go:82] duration metric: took 5.50604ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547898   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:27:51.547944   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.547951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.547954   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.549977   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.550423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:51.550435   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.550442   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.550445   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.552459   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.553082   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.553099   32020 pod_ready.go:82] duration metric: took 5.194272ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.553110   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.709397   32020 request.go:632] Waited for 156.217787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709446   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709451   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.709458   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.709461   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.712548   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.909629   32020 request.go:632] Waited for 196.367534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909689   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.909700   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.909708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.918132   32020 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 17:27:51.918809   32020 pod_ready.go:93] pod "etcd-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.918828   32020 pod_ready.go:82] duration metric: took 365.711465ms for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
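The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (the QPS/Burst fields on rest.Config), which paces this burst of per-pod and per-node GETs; as the message itself says, server-side priority and fairness is not involved. Raising the limits for a deliberately chatty client is a two-field change on the config; a sketch with arbitrary values (the defaults are roughly QPS 5 / Burst 10 when left at zero):

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        // A polling loop like the one in this log trips the default limiter and
        // produces the "Waited ..." messages; higher values reduce the stalls.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }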
	I1028 17:27:51.918850   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.109303   32020 request.go:632] Waited for 190.370368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109365   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109373   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.109383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.109388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.112392   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.309408   32020 request.go:632] Waited for 196.27481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309460   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309464   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.309471   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.309475   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.312195   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.312752   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.312777   32020 pod_ready.go:82] duration metric: took 393.917667ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.312791   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.509760   32020 request.go:632] Waited for 196.900981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509849   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509861   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.509872   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.509878   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.513709   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.709720   32020 request.go:632] Waited for 195.19818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709771   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709777   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.709784   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.709789   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.712910   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.713496   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.713513   32020 pod_ready.go:82] duration metric: took 400.71419ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.713525   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.910080   32020 request.go:632] Waited for 196.490754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910131   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910138   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.910148   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.910155   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.913570   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.109611   32020 request.go:632] Waited for 195.067242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109675   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109680   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.109688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.109692   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.112419   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:53.113243   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.113258   32020 pod_ready.go:82] duration metric: took 399.726328ms for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.113269   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.309322   32020 request.go:632] Waited for 195.985489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309373   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309378   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.309385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.309389   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.312514   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.509641   32020 request.go:632] Waited for 196.355986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509756   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.509788   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.509809   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.513067   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.513631   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.513648   32020 pod_ready.go:82] duration metric: took 400.372385ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.513660   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.709756   32020 request.go:632] Waited for 196.030975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709821   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709829   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.709838   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.709847   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.713250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.910289   32020 request.go:632] Waited for 196.241506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910347   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910352   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.910360   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.910365   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.913501   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.914111   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.914128   32020 pod_ready.go:82] duration metric: took 400.460847ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.914138   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.110262   32020 request.go:632] Waited for 196.057341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110321   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110328   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.110338   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.110344   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.113686   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.309625   32020 request.go:632] Waited for 195.198525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309704   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.309715   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.309724   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.312970   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.313530   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.313550   32020 pod_ready.go:82] duration metric: took 399.405564ms for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.313561   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.509582   32020 request.go:632] Waited for 195.958227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509651   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.509664   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.509669   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.513356   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.709469   32020 request.go:632] Waited for 195.28008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709541   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709547   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.709555   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.709562   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.712778   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.713684   32020 pod_ready.go:93] pod "kube-proxy-2z74r" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.713706   32020 pod_ready.go:82] duration metric: took 400.138051ms for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.713722   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.909768   32020 request.go:632] Waited for 195.979649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909859   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909871   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.909882   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.909893   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.912982   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.110064   32020 request.go:632] Waited for 196.359608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110135   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.110142   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.110148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.113297   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.113778   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.113796   32020 pod_ready.go:82] duration metric: took 400.063804ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.113805   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.309960   32020 request.go:632] Waited for 196.087241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310011   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310017   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.310027   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.310040   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.313630   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.509848   32020 request.go:632] Waited for 195.356609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509902   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509907   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.509917   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.509922   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.513283   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.513872   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.513891   32020 pod_ready.go:82] duration metric: took 400.079859ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.513903   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.709489   32020 request.go:632] Waited for 195.521691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709543   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709558   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.709582   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.709589   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.713346   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.910316   32020 request.go:632] Waited for 196.337736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910371   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910375   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.910383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.910388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.913484   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.914099   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.914115   32020 pod_ready.go:82] duration metric: took 400.201992ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.914124   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.110258   32020 request.go:632] Waited for 196.039546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110326   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110331   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.110337   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.110342   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.113332   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:56.310263   32020 request.go:632] Waited for 196.319737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310334   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310355   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.310370   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.310379   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.313786   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.314505   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.314532   32020 pod_ready.go:82] duration metric: took 400.399291ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.314546   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.510327   32020 request.go:632] Waited for 195.699418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510378   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510383   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.510390   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.510394   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.513464   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.709328   32020 request.go:632] Waited for 195.274185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709385   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709391   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.709398   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.709403   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.712740   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.713420   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.713436   32020 pod_ready.go:82] duration metric: took 398.882403ms for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.713446   32020 pod_ready.go:39] duration metric: took 5.200325366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:56.713469   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:27:56.713519   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:27:56.729002   32020 api_server.go:72] duration metric: took 25.075050157s to wait for apiserver process to appear ...
	I1028 17:27:56.729025   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:27:56.729051   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:27:56.734141   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 17:27:56.734212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:27:56.734223   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.734234   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.734242   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.735154   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:27:56.735212   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:27:56.735228   32020 api_server.go:131] duration metric: took 6.196303ms to wait for apiserver health ...
	I1028 17:27:56.735237   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:27:56.909657   32020 request.go:632] Waited for 174.332812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909707   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909712   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.909720   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.909725   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.915545   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:56.922175   32020 system_pods.go:59] 24 kube-system pods found
	I1028 17:27:56.922215   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:56.922225   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:56.922230   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:56.922235   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:56.922240   32020 system_pods.go:61] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:56.922248   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:56.922253   32020 system_pods.go:61] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:56.922259   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:56.922267   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:56.922273   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:56.922281   32020 system_pods.go:61] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:56.922288   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:56.922294   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:56.922302   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:56.922308   32020 system_pods.go:61] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:56.922317   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:56.922327   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:56.922335   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:56.922341   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:56.922348   32020 system_pods.go:61] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:56.922352   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:56.922355   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:56.922361   32020 system_pods.go:61] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:56.922364   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:56.922369   32020 system_pods.go:74] duration metric: took 187.124012ms to wait for pod list to return data ...
	I1028 17:27:56.922378   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:27:57.109949   32020 request.go:632] Waited for 187.506133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110004   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110012   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.110022   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.110033   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.113502   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:57.113628   32020 default_sa.go:45] found service account: "default"
	I1028 17:27:57.113645   32020 default_sa.go:55] duration metric: took 191.260682ms for default service account to be created ...
	I1028 17:27:57.113656   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:27:57.309925   32020 request.go:632] Waited for 196.205305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310024   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310036   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.310047   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.310053   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.315888   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:57.322856   32020 system_pods.go:86] 24 kube-system pods found
	I1028 17:27:57.322880   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:57.322886   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:57.322890   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:57.322893   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:57.322897   32020 system_pods.go:89] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:57.322900   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:57.322904   32020 system_pods.go:89] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:57.322907   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:57.322918   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:57.322927   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:57.322932   32020 system_pods.go:89] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:57.322940   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:57.322946   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:57.322951   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:57.322958   32020 system_pods.go:89] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:57.322966   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:57.322971   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:57.322978   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:57.322986   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:57.322991   32020 system_pods.go:89] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:57.322999   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:57.323006   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:57.323011   32020 system_pods.go:89] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:57.323018   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:57.323027   32020 system_pods.go:126] duration metric: took 209.364489ms to wait for k8s-apps to be running ...
	I1028 17:27:57.323045   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:27:57.323123   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:57.338248   32020 system_svc.go:56] duration metric: took 15.198158ms WaitForService to wait for kubelet
	I1028 17:27:57.338268   32020 kubeadm.go:582] duration metric: took 25.684324158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:27:57.338294   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:27:57.509596   32020 request.go:632] Waited for 171.215252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509662   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509677   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.509688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.509699   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.514522   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:57.515701   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515733   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515769   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515779   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515785   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515800   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515810   32020 node_conditions.go:105] duration metric: took 177.507704ms to run NodePressure ...
	I1028 17:27:57.515829   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:27:57.515863   32020 start.go:255] writing updated cluster config ...
	I1028 17:27:57.516171   32020 ssh_runner.go:195] Run: rm -f paused
	I1028 17:27:57.567306   32020 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:27:57.569290   32020 out.go:177] * Done! kubectl is now configured to use "ha-381619" cluster and "default" namespace by default
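	The log above shows minikube's readiness sequence: wait for the system-critical pods to report "Ready", then poll the apiserver's /healthz endpoint until it returns 200 before declaring the cluster usable. The following is a minimal illustrative sketch of that polling pattern, not minikube's actual implementation; the URL is copied from the log and the timeout, client settings, and function name are placeholders chosen for the example.

	// healthzwait.go - illustrative sketch only (assumed names/values, not minikube code)
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
	// mirroring the "waiting for apiserver healthz status" step in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster serves a self-signed certificate, so verification
			// is skipped here purely for the sake of the sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(2 * time.Second) // back off between probes
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.39.230:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz returned 200: ok")
	}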
	
	
	==> CRI-O <==
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.010477841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136710010459719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69f93577-28b5-465c-b203-feb9fb844c97 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.011111879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=152d4682-2353-4b53-a4e6-e9c4195b5692 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.011184102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=152d4682-2353-4b53-a4e6-e9c4195b5692 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.011400246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=152d4682-2353-4b53-a4e6-e9c4195b5692 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.064312350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b5f8951-5ced-4a49-ba23-5cb4d18625d6 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.064429291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b5f8951-5ced-4a49-ba23-5cb4d18625d6 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.066274679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=509a2edc-29b4-403a-9dcf-8a9c100dee1f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.066784907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136710066761739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=509a2edc-29b4-403a-9dcf-8a9c100dee1f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.070401872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82272b16-51ea-41cf-9639-585435e10c6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.070474601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82272b16-51ea-41cf-9639-585435e10c6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.070753104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82272b16-51ea-41cf-9639-585435e10c6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.112260356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db44b8ee-cc22-4737-8c20-a60c4916da02 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.112358156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db44b8ee-cc22-4737-8c20-a60c4916da02 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.113445364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd6e50e9-4357-475f-8af1-1c32a33aab5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.113815074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136710113796649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd6e50e9-4357-475f-8af1-1c32a33aab5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.114454976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5d43ca8-eba3-4e72-84b8-c419a054fb24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.114528909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5d43ca8-eba3-4e72-84b8-c419a054fb24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.114814092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5d43ca8-eba3-4e72-84b8-c419a054fb24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.156070387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c34ed024-80e3-438d-9aea-149e68f07d6a name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.156281611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c34ed024-80e3-438d-9aea-149e68f07d6a name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.157450339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56ccaebc-6b8e-42dd-8c20-68d0c50eeac9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.157854927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136710157833303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56ccaebc-6b8e-42dd-8c20-68d0c50eeac9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.158627087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb80c907-04b3-4f75-ab31-dcc9549977e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.158699255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb80c907-04b3-4f75-ab31-dcc9549977e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:50 ha-381619 crio[660]: time="2024-10-28 17:31:50.159082034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb80c907-04b3-4f75-ab31-dcc9549977e8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb3c00b93a7e6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   32dd7ef5c8db8       coredns-7c65d6cfc9-mtmvl
	439a12fd4f2e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   a8d9ef07a9de9       coredns-7c65d6cfc9-6lp7c
	32b25385ac6d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    6 minutes ago       Running             storage-provisioner       0                   cdf8a7008daaa       storage-provisioner
	02eaa5b848022       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                    6 minutes ago       Running             kindnet-cni               0                   ec93f4cb498de       kindnet-vj9vj
	4c2af4b0e8f70       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                    6 minutes ago       Running             kube-proxy                0                   31e8db8e13561       kube-proxy-mqdtj
	8820dc5a1a258       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215   6 minutes ago       Running             kube-vip                  0                   0440b64671662       kube-vip-ha-381619
	a2a4ad9e37b9c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                    6 minutes ago       Running             kube-apiserver            0                   8535275eaad56       kube-apiserver-ha-381619
	c4311ab52a438       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                    6 minutes ago       Running             kube-controller-manager   0                   75b5ea16f2e6b       kube-controller-manager-ha-381619
	5d299a6ffacac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                    6 minutes ago       Running             etcd                      0                   2d476f176dee3       etcd-ha-381619
	8f6c077dbde89       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                    6 minutes ago       Running             kube-scheduler            0                   2c5f11da0112e       kube-scheduler-ha-381619
	
	
	==> coredns [439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f] <==
	[INFO] 10.244.2.2:53226 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001368106s
	[INFO] 10.244.2.2:36312 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118066s
	[INFO] 10.244.1.2:38518 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000323292s
	[INFO] 10.244.1.2:47890 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000118239s
	[INFO] 10.244.1.2:45070 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000130482s
	[INFO] 10.244.1.2:39687 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001925125s
	[INFO] 10.244.2.3:53812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151587s
	[INFO] 10.244.2.3:54592 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180193s
	[INFO] 10.244.2.3:46470 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138925s
	[INFO] 10.244.2.2:48981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776352s
	[INFO] 10.244.2.2:35249 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131241s
	[INFO] 10.244.2.2:53917 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177037s
	[INFO] 10.244.2.2:34049 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001120542s
	[INFO] 10.244.1.2:35278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111663s
	[INFO] 10.244.1.2:37962 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106563s
	[INFO] 10.244.1.2:40545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246646s
	[INFO] 10.244.1.2:40814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215904s
	[INFO] 10.244.2.3:49806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000229773s
	[INFO] 10.244.2.2:44763 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117588s
	[INFO] 10.244.2.3:48756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125652s
	[INFO] 10.244.2.3:41328 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177165s
	[INFO] 10.244.2.3:35650 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137462s
	[INFO] 10.244.2.2:60478 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163829s
	[INFO] 10.244.2.2:51252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106643s
	[INFO] 10.244.1.2:56942 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137828s
	
	
	==> coredns [fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30] <==
	[INFO] 10.244.2.3:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131477s
	[INFO] 10.244.2.2:46692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196624s
	[INFO] 10.244.2.2:38402 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226272s
	[INFO] 10.244.2.2:34845 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153045s
	[INFO] 10.244.2.2:49870 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121016s
	[INFO] 10.244.1.2:51535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001893779s
	[INFO] 10.244.1.2:36412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109955s
	[INFO] 10.244.1.2:53434 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000734s
	[INFO] 10.244.1.2:38007 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101464s
	[INFO] 10.244.2.3:39546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.2.3:49299 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158392s
	[INFO] 10.244.2.3:42607 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102312s
	[INFO] 10.244.2.2:36855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150344s
	[INFO] 10.244.2.2:46374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00016867s
	[INFO] 10.244.2.2:37275 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112218s
	[INFO] 10.244.1.2:41523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017259s
	[INFO] 10.244.1.2:43696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000347465s
	[INFO] 10.244.1.2:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161099s
	[INFO] 10.244.1.2:59192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118287s
	[INFO] 10.244.2.3:42470 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243243s
	[INFO] 10.244.2.2:35932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020307s
	[INFO] 10.244.2.2:39597 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184178s
	[INFO] 10.244.1.2:43973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139891s
	[INFO] 10.244.1.2:41644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171411s
	[INFO] 10.244.1.2:47984 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086921s
	
	
	==> describe nodes <==
	Name:               ha-381619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:25:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-381619
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ff487634ba146ebb8929cc99763c422
	  System UUID:                1ff48763-4ba1-46eb-b892-9cc99763c422
	  Boot ID:                    ce5a7712-d088-475f-80ec-c8b7dee605bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6lp7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 coredns-7c65d6cfc9-mtmvl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 etcd-ha-381619                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m37s
	  kube-system                 kindnet-vj9vj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-apiserver-ha-381619             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-381619    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-mqdtj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-scheduler-ha-381619             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-381619                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 6m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m43s (x7 over 6m43s)  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m43s (x8 over 6m43s)  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x8 over 6m43s)  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m36s                  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s                  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s                  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m33s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  NodeReady                6m20s                  kubelet          Node ha-381619 status is now: NodeReady
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	
	
	Name:               ha-381619-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:26:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:29:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-381619-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe038bc140e34a24bfa4fe915bd6a83f
	  System UUID:                fe038bc1-40e3-4a24-bfa4-fe915bd6a83f
	  Boot ID:                    2395418c-cd94-4285-8c38-7cd31a1df92a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dxwnw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-381619-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m40s
	  kube-system                 kindnet-2ggdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m40s
	  kube-system                 kube-apiserver-ha-381619-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-381619-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-nrfgq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-ha-381619-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-381619-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m40s (x2 over 5m41s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x2 over 5m41s)  kubelet          Node ha-381619-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x2 over 5m41s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeReady                5m18s                  kubelet          Node ha-381619-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-381619-m02 status is now: NodeNotReady
	
	
	Name:               ha-381619-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:27:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-381619-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f056208103704b70bfb827d2e01fcbd6
	  System UUID:                f0562081-0370-4b70-bfb8-27d2e01fcbd6
	  Boot ID:                    3c41c87b-23bb-455f-8665-1ca87b736f8b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-26cg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  default                     busybox-7dff88458-9n6bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-381619-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kindnet-82dqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m22s
	  kube-system                 kube-apiserver-ha-381619-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-ha-381619-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-2z74r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-ha-381619-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-vip-ha-381619-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x8 over 4m22s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x8 over 4m22s)  kubelet          Node ha-381619-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x7 over 4m22s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	
	
	Name:               ha-381619-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_28_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:28:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-381619-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c794eda5b61f4b51846d119496d6611f
	  System UUID:                c794eda5-b61f-4b51-846d-119496d6611f
	  Boot ID:                    d054e196-c392-4e7e-a1b3-e459ee7974d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzqx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-7dwhb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m9s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m9s)  kubelet          Node ha-381619-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m9s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-381619-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 17:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050172] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854623] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.491096] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.570925] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.341236] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.065909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059908] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.181734] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.112783] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.252616] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct28 17:25] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.759910] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.058388] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.418126] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.806365] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +4.131777] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.537990] kauditd_printk_skb: 41 callbacks suppressed
	[  +9.942403] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9] <==
	{"level":"warn","ts":"2024-10-28T17:31:50.397318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.405370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.414603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.419161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.430457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.435315Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"af936484d1d2a2d6","rtt":"8.255996ms","error":"dial tcp 192.168.39.171:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-28T17:31:50.435432Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"af936484d1d2a2d6","rtt":"880.345µs","error":"dial tcp 192.168.39.171:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-28T17:31:50.436354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.442842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.446705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.450120Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.457484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.467049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.475847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.476203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.479548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.482471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.489843Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.495407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.502012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.505385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.508795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.512570Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.518321Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:50.524111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:31:50 up 7 min,  0 users,  load average: 0.07, 0.21, 0.12
	Linux ha-381619 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3] <==
	I1028 17:31:20.292249       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:30.295378       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:30.295542       1 main.go:300] handling current node
	I1028 17:31:30.295590       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:30.295611       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:30.296072       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:30.296113       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:30.296285       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:30.296308       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:40.295696       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:40.295776       1 main.go:300] handling current node
	I1028 17:31:40.295795       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:40.295804       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:40.296160       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:40.296192       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:40.296331       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:40.296358       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:50.300065       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:50.300101       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:50.300348       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:50.300359       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:50.300489       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:50.300496       1 main.go:300] handling current node
	I1028 17:31:50.300514       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:50.300518       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37] <==
	W1028 17:25:12.245785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I1028 17:25:12.247133       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 17:25:12.256065       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 17:25:12.326331       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 17:25:13.936309       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 17:25:13.952773       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 17:25:13.968009       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 17:25:17.830466       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1028 17:25:18.077531       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1028 17:28:07.019815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41404: use of closed network connection
	E1028 17:28:07.205390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41420: use of closed network connection
	E1028 17:28:07.386536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41448: use of closed network connection
	E1028 17:28:07.599536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41470: use of closed network connection
	E1028 17:28:07.775264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41490: use of closed network connection
	E1028 17:28:07.949242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41512: use of closed network connection
	E1028 17:28:08.118133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41522: use of closed network connection
	E1028 17:28:08.303400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41550: use of closed network connection
	E1028 17:28:08.475723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41556: use of closed network connection
	E1028 17:28:08.762057       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47594: use of closed network connection
	E1028 17:28:08.944378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47612: use of closed network connection
	E1028 17:28:09.126803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47636: use of closed network connection
	E1028 17:28:09.297149       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47658: use of closed network connection
	E1028 17:28:09.471140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47674: use of closed network connection
	E1028 17:28:09.647026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47704: use of closed network connection
	W1028 17:29:32.257515       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.230]
	
	
	==> kube-controller-manager [c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8] <==
	I1028 17:28:42.026011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.036622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.060198       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.297173       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-381619-m04"
	I1028 17:28:42.386481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.396569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.781672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.951532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.966339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:46.926084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:47.034432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:52.333791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:29:04.463505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:06.946376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:12.658007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:30:06.972035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:06.972340       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:30:06.993167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:07.005350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.940759ms"
	I1028 17:30:07.006727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.8µs"
	I1028 17:30:07.346197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:12.214622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:31.329575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619"
	
	
	==> kube-proxy [4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 17:25:18.698349       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 17:25:18.711046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E1028 17:25:18.711157       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:25:18.745433       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 17:25:18.745462       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 17:25:18.745490       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:25:18.747834       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:25:18.748160       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:25:18.748312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:25:18.749989       1 config.go:199] "Starting service config controller"
	I1028 17:25:18.750071       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:25:18.750117       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:25:18.750134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:25:18.750598       1 config.go:328] "Starting node config controller"
	I1028 17:25:18.751738       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:25:18.851210       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:25:18.851309       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:25:18.852898       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b] <==
	E1028 17:25:11.721217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.842707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:25:11.842776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.845287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:25:11.848083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.886433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:25:11.886602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1028 17:25:14.002937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:27:58.460072       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="568dfe45-5437-4cfd-8d20-2fa1e33d8999" pod="default/busybox-7dff88458-9n6bb" assumedNode="ha-381619-m03" currentNode="ha-381619-m02"
	E1028 17:27:58.471238       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m02"
	E1028 17:27:58.471407       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 568dfe45-5437-4cfd-8d20-2fa1e33d8999(default/busybox-7dff88458-9n6bb) was assumed on ha-381619-m02 but assigned to ha-381619-m03" pod="default/busybox-7dff88458-9n6bb"
	E1028 17:27:58.471445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" pod="default/busybox-7dff88458-9n6bb"
	I1028 17:27:58.471522       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m03"
	E1028 17:28:42.093317       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.093832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9291bc3b-2fa3-4a6c-99d3-7bb2a5721b25(kube-system/kindnet-fzqx2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fzqx2"
	E1028 17:28:42.094010       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-fzqx2"
	I1028 17:28:42.094225       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.149948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.154547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15a36ca9-85be-4b6a-8e4a-31495d13a0c1(kube-system/kube-proxy-7dwhb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7dwhb"
	E1028 17:28:42.156945       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" pod="kube-system/kube-proxy-7dwhb"
	I1028 17:28:42.157115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.164640       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	E1028 17:28:42.164715       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 61afb85d-818e-40a2-ad14-87c5f4541d0e(kube-system/kindnet-p6x26) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p6x26"
	E1028 17:28:42.164729       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-p6x26"
	I1028 17:28:42.164745       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	
	
	==> kubelet <==
	Oct 28 17:30:13 ha-381619 kubelet[1301]: E1028 17:30:13.976259    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136613975105937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:13 ha-381619 kubelet[1301]: E1028 17:30:13.976959    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136613975105937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979164    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979443    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.980958    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.982957    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988254    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988294    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989574    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989617    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996610    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996710    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.872137    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997852    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997963    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:23.999904    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:24.000328    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001784    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001829    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003002    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003044    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-381619 -n ha-381619
helpers_test.go:261: (dbg) Run:  kubectl --context ha-381619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr: (4.152812325s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-381619 -n ha-381619
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 logs -n 25: (1.300268869s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m03_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m04 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp testdata/cp-test.txt                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m04_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03:/home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m03 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-381619 node stop m02 -v=7                                                    | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-381619 node start m02 -v=7                                                   | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:31 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:24:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:24:32.704402   32020 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:24:32.704551   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704563   32020 out.go:358] Setting ErrFile to fd 2...
	I1028 17:24:32.704569   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704718   32020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:24:32.705246   32020 out.go:352] Setting JSON to false
	I1028 17:24:32.706049   32020 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4016,"bootTime":1730132257,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:24:32.706140   32020 start.go:139] virtualization: kvm guest
	I1028 17:24:32.708076   32020 out.go:177] * [ha-381619] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:24:32.709709   32020 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:24:32.709708   32020 notify.go:220] Checking for updates...
	I1028 17:24:32.711979   32020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:24:32.713179   32020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:24:32.714308   32020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.715427   32020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:24:32.716562   32020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:24:32.717898   32020 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:24:32.750233   32020 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 17:24:32.751376   32020 start.go:297] selected driver: kvm2
	I1028 17:24:32.751386   32020 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:24:32.751396   32020 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:24:32.752108   32020 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.752174   32020 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:24:32.765779   32020 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:24:32.765818   32020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:24:32.766066   32020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:24:32.766095   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:24:32.766149   32020 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 17:24:32.766159   32020 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 17:24:32.766215   32020 start.go:340] cluster config:
	{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:24:32.766343   32020 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.768753   32020 out.go:177] * Starting "ha-381619" primary control-plane node in "ha-381619" cluster
	I1028 17:24:32.769947   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:32.769974   32020 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:24:32.769982   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:24:32.770049   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:24:32.770062   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:24:32.770342   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:32.770362   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json: {Name:mkd5c3a5f97562236390379745e09449a8badb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:24:32.770497   32020 start.go:360] acquireMachinesLock for ha-381619: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:24:32.770539   32020 start.go:364] duration metric: took 26.277µs to acquireMachinesLock for "ha-381619"
	I1028 17:24:32.770561   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:24:32.770606   32020 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 17:24:32.772872   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:24:32.772986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:32.773028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:32.786246   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I1028 17:24:32.786651   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:32.787204   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:24:32.787223   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:32.787585   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:32.787761   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:32.787890   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:32.788041   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:24:32.788072   32020 client.go:168] LocalClient.Create starting
	I1028 17:24:32.788105   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:24:32.788134   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788152   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788202   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:24:32.788220   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788232   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788246   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:24:32.788258   32020 main.go:141] libmachine: (ha-381619) Calling .PreCreateCheck
	I1028 17:24:32.788587   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:32.789017   32020 main.go:141] libmachine: Creating machine...
	I1028 17:24:32.789034   32020 main.go:141] libmachine: (ha-381619) Calling .Create
	I1028 17:24:32.789161   32020 main.go:141] libmachine: (ha-381619) Creating KVM machine...
	I1028 17:24:32.790254   32020 main.go:141] libmachine: (ha-381619) DBG | found existing default KVM network
	I1028 17:24:32.790889   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.790760   32043 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1028 17:24:32.790924   32020 main.go:141] libmachine: (ha-381619) DBG | created network xml: 
	I1028 17:24:32.790942   32020 main.go:141] libmachine: (ha-381619) DBG | <network>
	I1028 17:24:32.790953   32020 main.go:141] libmachine: (ha-381619) DBG |   <name>mk-ha-381619</name>
	I1028 17:24:32.790960   32020 main.go:141] libmachine: (ha-381619) DBG |   <dns enable='no'/>
	I1028 17:24:32.790971   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.790981   32020 main.go:141] libmachine: (ha-381619) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 17:24:32.791022   32020 main.go:141] libmachine: (ha-381619) DBG |     <dhcp>
	I1028 17:24:32.791042   32020 main.go:141] libmachine: (ha-381619) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 17:24:32.791053   32020 main.go:141] libmachine: (ha-381619) DBG |     </dhcp>
	I1028 17:24:32.791062   32020 main.go:141] libmachine: (ha-381619) DBG |   </ip>
	I1028 17:24:32.791070   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.791079   32020 main.go:141] libmachine: (ha-381619) DBG | </network>
	I1028 17:24:32.791092   32020 main.go:141] libmachine: (ha-381619) DBG | 
	I1028 17:24:32.795776   32020 main.go:141] libmachine: (ha-381619) DBG | trying to create private KVM network mk-ha-381619 192.168.39.0/24...
	I1028 17:24:32.856590   32020 main.go:141] libmachine: (ha-381619) DBG | private KVM network mk-ha-381619 192.168.39.0/24 created
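	The private libvirt network mk-ha-381619 logged as created above can be inspected on the host with the standard virsh CLI; the commands below are only an illustrative check (network name taken from the log), not something the test itself runs:

	    # list libvirt networks and confirm mk-ha-381619 is active
	    virsh net-list --all
	    # dump the generated network XML to compare with the definition logged above
	    virsh net-dumpxml mk-ha-381619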
	I1028 17:24:32.856623   32020 main.go:141] libmachine: (ha-381619) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:32.856641   32020 main.go:141] libmachine: (ha-381619) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:24:32.856686   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.856608   32043 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.856733   32020 main.go:141] libmachine: (ha-381619) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:24:33.109141   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.109021   32043 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa...
	I1028 17:24:33.382423   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382288   32043 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk...
	I1028 17:24:33.382457   32020 main.go:141] libmachine: (ha-381619) DBG | Writing magic tar header
	I1028 17:24:33.382473   32020 main.go:141] libmachine: (ha-381619) DBG | Writing SSH key tar header
	I1028 17:24:33.382487   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382434   32043 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:33.382577   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 (perms=drwx------)
	I1028 17:24:33.382600   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:24:33.382611   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619
	I1028 17:24:33.382624   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:24:33.382636   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:33.382651   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:24:33.382662   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:24:33.382673   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:24:33.382683   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:24:33.382696   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:24:33.382710   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:24:33.382720   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:24:33.382733   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:33.382743   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home
	I1028 17:24:33.382755   32020 main.go:141] libmachine: (ha-381619) DBG | Skipping /home - not owner
	I1028 17:24:33.383729   32020 main.go:141] libmachine: (ha-381619) define libvirt domain using xml: 
	I1028 17:24:33.383753   32020 main.go:141] libmachine: (ha-381619) <domain type='kvm'>
	I1028 17:24:33.383763   32020 main.go:141] libmachine: (ha-381619)   <name>ha-381619</name>
	I1028 17:24:33.383771   32020 main.go:141] libmachine: (ha-381619)   <memory unit='MiB'>2200</memory>
	I1028 17:24:33.383782   32020 main.go:141] libmachine: (ha-381619)   <vcpu>2</vcpu>
	I1028 17:24:33.383791   32020 main.go:141] libmachine: (ha-381619)   <features>
	I1028 17:24:33.383800   32020 main.go:141] libmachine: (ha-381619)     <acpi/>
	I1028 17:24:33.383823   32020 main.go:141] libmachine: (ha-381619)     <apic/>
	I1028 17:24:33.383834   32020 main.go:141] libmachine: (ha-381619)     <pae/>
	I1028 17:24:33.383847   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.383857   32020 main.go:141] libmachine: (ha-381619)   </features>
	I1028 17:24:33.383868   32020 main.go:141] libmachine: (ha-381619)   <cpu mode='host-passthrough'>
	I1028 17:24:33.383876   32020 main.go:141] libmachine: (ha-381619)   
	I1028 17:24:33.383886   32020 main.go:141] libmachine: (ha-381619)   </cpu>
	I1028 17:24:33.383894   32020 main.go:141] libmachine: (ha-381619)   <os>
	I1028 17:24:33.383901   32020 main.go:141] libmachine: (ha-381619)     <type>hvm</type>
	I1028 17:24:33.383912   32020 main.go:141] libmachine: (ha-381619)     <boot dev='cdrom'/>
	I1028 17:24:33.383921   32020 main.go:141] libmachine: (ha-381619)     <boot dev='hd'/>
	I1028 17:24:33.383934   32020 main.go:141] libmachine: (ha-381619)     <bootmenu enable='no'/>
	I1028 17:24:33.383944   32020 main.go:141] libmachine: (ha-381619)   </os>
	I1028 17:24:33.383952   32020 main.go:141] libmachine: (ha-381619)   <devices>
	I1028 17:24:33.383961   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='cdrom'>
	I1028 17:24:33.383974   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/boot2docker.iso'/>
	I1028 17:24:33.383984   32020 main.go:141] libmachine: (ha-381619)       <target dev='hdc' bus='scsi'/>
	I1028 17:24:33.383994   32020 main.go:141] libmachine: (ha-381619)       <readonly/>
	I1028 17:24:33.384049   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384071   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='disk'>
	I1028 17:24:33.384079   32020 main.go:141] libmachine: (ha-381619)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:24:33.384087   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk'/>
	I1028 17:24:33.384092   32020 main.go:141] libmachine: (ha-381619)       <target dev='hda' bus='virtio'/>
	I1028 17:24:33.384099   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384104   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384111   32020 main.go:141] libmachine: (ha-381619)       <source network='mk-ha-381619'/>
	I1028 17:24:33.384116   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384122   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384127   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384134   32020 main.go:141] libmachine: (ha-381619)       <source network='default'/>
	I1028 17:24:33.384140   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384146   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384151   32020 main.go:141] libmachine: (ha-381619)     <serial type='pty'>
	I1028 17:24:33.384157   32020 main.go:141] libmachine: (ha-381619)       <target port='0'/>
	I1028 17:24:33.384180   32020 main.go:141] libmachine: (ha-381619)     </serial>
	I1028 17:24:33.384203   32020 main.go:141] libmachine: (ha-381619)     <console type='pty'>
	I1028 17:24:33.384217   32020 main.go:141] libmachine: (ha-381619)       <target type='serial' port='0'/>
	I1028 17:24:33.384235   32020 main.go:141] libmachine: (ha-381619)     </console>
	I1028 17:24:33.384247   32020 main.go:141] libmachine: (ha-381619)     <rng model='virtio'>
	I1028 17:24:33.384258   32020 main.go:141] libmachine: (ha-381619)       <backend model='random'>/dev/random</backend>
	I1028 17:24:33.384267   32020 main.go:141] libmachine: (ha-381619)     </rng>
	I1028 17:24:33.384291   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384303   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384320   32020 main.go:141] libmachine: (ha-381619)   </devices>
	I1028 17:24:33.384331   32020 main.go:141] libmachine: (ha-381619) </domain>
	I1028 17:24:33.384339   32020 main.go:141] libmachine: (ha-381619) 
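	Once the domain XML above has been defined, the resulting domain can also be checked by hand with virsh; this is a hedged verification sketch (domain name ha-381619 from the log), separate from what libmachine does:

	    # confirm the domain exists and show its state
	    virsh list --all
	    # dump the live domain XML to compare with the definition above
	    virsh dumpxml ha-381619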
	I1028 17:24:33.388368   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:d7:31:89 in network default
	I1028 17:24:33.388983   32020 main.go:141] libmachine: (ha-381619) Ensuring networks are active...
	I1028 17:24:33.389001   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:33.389577   32020 main.go:141] libmachine: (ha-381619) Ensuring network default is active
	I1028 17:24:33.389893   32020 main.go:141] libmachine: (ha-381619) Ensuring network mk-ha-381619 is active
	I1028 17:24:33.390366   32020 main.go:141] libmachine: (ha-381619) Getting domain xml...
	I1028 17:24:33.390966   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:34.558865   32020 main.go:141] libmachine: (ha-381619) Waiting to get IP...
	I1028 17:24:34.559610   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.559962   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.559982   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.559945   32043 retry.go:31] will retry after 257.179075ms: waiting for machine to come up
	I1028 17:24:34.818320   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.818636   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.818664   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.818591   32043 retry.go:31] will retry after 336.999416ms: waiting for machine to come up
	I1028 17:24:35.156955   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.157385   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.157410   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.157352   32043 retry.go:31] will retry after 376.336351ms: waiting for machine to come up
	I1028 17:24:35.534739   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.535148   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.535176   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.535109   32043 retry.go:31] will retry after 414.103212ms: waiting for machine to come up
	I1028 17:24:35.950512   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.950871   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.950902   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.950833   32043 retry.go:31] will retry after 701.752446ms: waiting for machine to come up
	I1028 17:24:36.653573   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:36.653919   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:36.653945   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:36.653879   32043 retry.go:31] will retry after 793.432647ms: waiting for machine to come up
	I1028 17:24:37.448827   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:37.449212   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:37.449233   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:37.449175   32043 retry.go:31] will retry after 894.965011ms: waiting for machine to come up
	I1028 17:24:38.345655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:38.346083   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:38.346104   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:38.346040   32043 retry.go:31] will retry after 955.035568ms: waiting for machine to come up
	I1028 17:24:39.303112   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:39.303513   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:39.303566   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:39.303470   32043 retry.go:31] will retry after 1.649236041s: waiting for machine to come up
	I1028 17:24:40.955622   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:40.956156   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:40.956183   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:40.956118   32043 retry.go:31] will retry after 1.776451571s: waiting for machine to come up
	I1028 17:24:42.733883   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:42.734354   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:42.734378   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:42.734330   32043 retry.go:31] will retry after 2.290450392s: waiting for machine to come up
	I1028 17:24:45.027299   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:45.027697   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:45.027727   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:45.027647   32043 retry.go:31] will retry after 3.000171726s: waiting for machine to come up
	I1028 17:24:48.029293   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:48.029625   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:48.029642   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:48.029599   32043 retry.go:31] will retry after 3.464287385s: waiting for machine to come up
	I1028 17:24:51.498145   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:51.498494   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:51.498520   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:51.498450   32043 retry.go:31] will retry after 4.798676944s: waiting for machine to come up
	I1028 17:24:56.301062   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301461   32020 main.go:141] libmachine: (ha-381619) Found IP for machine: 192.168.39.230
	I1028 17:24:56.301476   32020 main.go:141] libmachine: (ha-381619) Reserving static IP address...
	I1028 17:24:56.301485   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has current primary IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301800   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find host DHCP lease matching {name: "ha-381619", mac: "52:54:00:bf:e3:f2", ip: "192.168.39.230"} in network mk-ha-381619
	I1028 17:24:56.367996   32020 main.go:141] libmachine: (ha-381619) Reserved static IP address: 192.168.39.230
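	The retry loop above polls libvirt until a DHCP lease matching the domain's MAC address appears. The same lease table can be read directly on the host; the command below is illustrative only, with the network name taken from the log:

	    # show DHCP leases handed out on the minikube-created network
	    virsh net-dhcp-leases mk-ha-381619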
	I1028 17:24:56.368025   32020 main.go:141] libmachine: (ha-381619) Waiting for SSH to be available...
	I1028 17:24:56.368033   32020 main.go:141] libmachine: (ha-381619) DBG | Getting to WaitForSSH function...
	I1028 17:24:56.370488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.370848   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.370872   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.371022   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH client type: external
	I1028 17:24:56.371056   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa (-rw-------)
	I1028 17:24:56.371091   32020 main.go:141] libmachine: (ha-381619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:24:56.371104   32020 main.go:141] libmachine: (ha-381619) DBG | About to run SSH command:
	I1028 17:24:56.371114   32020 main.go:141] libmachine: (ha-381619) DBG | exit 0
	I1028 17:24:56.492195   32020 main.go:141] libmachine: (ha-381619) DBG | SSH cmd err, output: <nil>: 
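	The argument vector logged for the external SSH client corresponds to an ordinary ssh invocation; written out by hand (purely illustrative, key path and IP copied from the log) it would look like:

	    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	        -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa \
	        docker@192.168.39.230 'exit 0'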
	I1028 17:24:56.492449   32020 main.go:141] libmachine: (ha-381619) KVM machine creation complete!
	I1028 17:24:56.492777   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:56.493326   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493514   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493649   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:24:56.493664   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:24:56.494850   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:24:56.494862   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:24:56.494867   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:24:56.494872   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.496787   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497152   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.497174   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497302   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.497464   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497595   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497725   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.497885   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.498064   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.498078   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:24:56.595488   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:24:56.595509   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:24:56.595519   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.597859   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598187   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.598209   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598403   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.598582   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598880   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.599036   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.599254   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.599265   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:24:56.696771   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:24:56.696858   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:24:56.696872   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:24:56.696881   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697109   32020 buildroot.go:166] provisioning hostname "ha-381619"
	I1028 17:24:56.697130   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697282   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.699770   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700115   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.700139   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700271   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.700441   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700571   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700701   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.700825   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.701013   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.701029   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619 && echo "ha-381619" | sudo tee /etc/hostname
	I1028 17:24:56.814628   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:24:56.814655   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.817104   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817470   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.817491   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817657   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.817827   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.817992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.818124   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.818278   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.818455   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.818475   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:24:56.926794   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
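	The two SSH commands above set the guest hostname and pin it in /etc/hosts. A minimal check on the guest (standard commands, not quoted from this run) would be:

	    # confirm the static hostname and the 127.0.1.1 mapping written above
	    hostnamectl --static
	    grep ha-381619 /etc/hostname /etc/hosts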
	I1028 17:24:56.926821   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:24:56.926841   32020 buildroot.go:174] setting up certificates
	I1028 17:24:56.926853   32020 provision.go:84] configureAuth start
	I1028 17:24:56.926865   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.927086   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:56.929479   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929816   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.929835   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929984   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.931934   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932225   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.932249   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932384   32020 provision.go:143] copyHostCerts
	I1028 17:24:56.932411   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932452   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:24:56.932465   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932554   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:24:56.932658   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932682   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:24:56.932692   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932731   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:24:56.932840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932873   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:24:56.932883   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932921   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:24:56.933013   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619 san=[127.0.0.1 192.168.39.230 ha-381619 localhost minikube]
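	The server certificate generated here carries the SANs listed in the log (127.0.0.1, 192.168.39.230, ha-381619, localhost, minikube). As an illustrative check, the SAN extension of the resulting server.pem could be printed with openssl (path taken from the log line above):

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'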
	I1028 17:24:57.000217   32020 provision.go:177] copyRemoteCerts
	I1028 17:24:57.000264   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:24:57.000288   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.002585   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.002859   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.002887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.003010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.003192   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.003327   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.003456   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.082327   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:24:57.082386   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:24:57.108992   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:24:57.109040   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 17:24:57.131168   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:24:57.131225   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:24:57.153241   32020 provision.go:87] duration metric: took 226.378501ms to configureAuth
	I1028 17:24:57.153264   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:24:57.153419   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:24:57.153491   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.155887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156229   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.156255   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156416   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.156589   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156751   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156909   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.157032   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.157170   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.157183   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:24:57.371091   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:24:57.371116   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:24:57.371138   32020 main.go:141] libmachine: (ha-381619) Calling .GetURL
	I1028 17:24:57.372265   32020 main.go:141] libmachine: (ha-381619) DBG | Using libvirt version 6000000
	I1028 17:24:57.374388   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374694   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.374715   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374887   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:24:57.374900   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:24:57.374907   32020 client.go:171] duration metric: took 24.586826396s to LocalClient.Create
	I1028 17:24:57.374929   32020 start.go:167] duration metric: took 24.586887382s to libmachine.API.Create "ha-381619"
	I1028 17:24:57.374942   32020 start.go:293] postStartSetup for "ha-381619" (driver="kvm2")
	I1028 17:24:57.374957   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:24:57.374978   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.375196   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:24:57.375226   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.377231   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377544   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.377561   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377690   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.377841   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.378010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.378127   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.458768   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:24:57.463205   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:24:57.463222   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:24:57.463283   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:24:57.463370   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:24:57.463382   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:24:57.463492   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:24:57.473092   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:24:57.499838   32020 start.go:296] duration metric: took 124.881379ms for postStartSetup
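	postStartSetup copies the host's extra CA file 206802.pem into /etc/ssl/certs on the guest. A quick, purely illustrative inspection of the synced certificate (path from the log) could be:

	    # print subject and validity of the cert copied above
	    openssl x509 -noout -subject -dates -in /etc/ssl/certs/206802.pem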
	I1028 17:24:57.499880   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:57.500412   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.502520   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.502817   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.502846   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.503009   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:57.503210   32020 start.go:128] duration metric: took 24.732586487s to createHost
	I1028 17:24:57.503234   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.505276   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505578   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.505602   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505703   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.505855   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.505992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.506115   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.506245   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.506406   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.506418   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:24:57.608878   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136297.586420313
	
	I1028 17:24:57.608900   32020 fix.go:216] guest clock: 1730136297.586420313
	I1028 17:24:57.608919   32020 fix.go:229] Guest: 2024-10-28 17:24:57.586420313 +0000 UTC Remote: 2024-10-28 17:24:57.503223131 +0000 UTC m=+24.834191366 (delta=83.197182ms)
	I1028 17:24:57.608956   32020 fix.go:200] guest clock delta is within tolerance: 83.197182ms
	I1028 17:24:57.608963   32020 start.go:83] releasing machines lock for "ha-381619", held for 24.838412899s
	I1028 17:24:57.608987   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.609175   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.611488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611798   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.611830   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611946   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612411   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612586   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612684   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:24:57.612719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.612770   32020 ssh_runner.go:195] Run: cat /version.json
	I1028 17:24:57.612787   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.615260   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615428   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615614   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615648   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615673   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615698   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615759   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615940   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615944   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616269   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616272   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.616376   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.711561   32020 ssh_runner.go:195] Run: systemctl --version
	I1028 17:24:57.717385   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:24:57.881204   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:24:57.887117   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:24:57.887178   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:24:57.902953   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:24:57.902971   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:24:57.903029   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:24:57.919599   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:24:57.932865   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:24:57.932911   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:24:57.945714   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:24:57.958712   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:24:58.074716   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:24:58.228971   32020 docker.go:233] disabling docker service ...
	I1028 17:24:58.229043   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:24:58.242560   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:24:58.255313   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:24:58.370441   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:24:58.483893   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:24:58.497247   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:24:58.514703   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:24:58.514757   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.524413   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:24:58.524490   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.534125   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.543414   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.553077   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:24:58.562606   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.572154   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.588419   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.597992   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:24:58.606565   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:24:58.606613   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:24:58.618268   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:24:58.627230   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:24:58.734287   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
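	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf and /etc/crictl.yaml before CRI-O is restarted. A hedged way to confirm the result on the guest (not part of the test flow) would be:

	    # verify the pause image, cgroup manager and sysctl override written above
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # confirm the crictl endpoint and the kernel settings applied via modprobe/echo
	    cat /etc/crictl.yaml
	    lsmod | grep br_netfilter
	    cat /proc/sys/net/ipv4/ip_forward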
	I1028 17:24:58.826354   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:24:58.826428   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:24:58.830997   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:24:58.831057   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:24:58.834579   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:24:58.876875   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:24:58.876953   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.903643   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.932572   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:24:58.933808   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:58.935970   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936230   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:58.936257   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936509   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:24:58.940296   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:24:58.952574   32020 kubeadm.go:883] updating cluster {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:24:58.952676   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:58.952732   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:24:58.984654   32020 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 17:24:58.984732   32020 ssh_runner.go:195] Run: which lz4
	I1028 17:24:58.988394   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 17:24:58.988478   32020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 17:24:58.992506   32020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 17:24:58.992533   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 17:25:00.255551   32020 crio.go:462] duration metric: took 1.267100193s to copy over tarball
	I1028 17:25:00.255628   32020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 17:25:02.245448   32020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.989785325s)
	I1028 17:25:02.245479   32020 crio.go:469] duration metric: took 1.989902074s to extract the tarball
	I1028 17:25:02.245485   32020 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 17:25:02.282635   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:25:02.327962   32020 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:25:02.327983   32020 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:25:02.327990   32020 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.2 crio true true} ...
	I1028 17:25:02.328079   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:02.328139   32020 ssh_runner.go:195] Run: crio config
	I1028 17:25:02.370696   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:02.370725   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:02.370738   32020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:25:02.370766   32020 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-381619 NodeName:ha-381619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:25:02.370888   32020 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-381619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:25:02.370908   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:02.370947   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:02.386589   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:02.386701   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 17:25:02.386768   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:02.396553   32020 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:25:02.396617   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 17:25:02.405738   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 17:25:02.421400   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:02.437117   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 17:25:02.452375   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 17:25:02.467922   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:02.471573   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:02.483093   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:02.609045   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:25:02.625565   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.230
	I1028 17:25:02.625588   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:02.625605   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.625774   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:02.625839   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:02.625856   32020 certs.go:256] generating profile certs ...
	I1028 17:25:02.625920   32020 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:02.625937   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt with IP's: []
	I1028 17:25:02.808278   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt ...
	I1028 17:25:02.808301   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt: {Name:mkc46e4b9b851301d42b46f45c8b044b11edfb36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808454   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key ...
	I1028 17:25:02.808464   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key: {Name:mkd681d3c01379608131f30441747317e91c7a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808570   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb
	I1028 17:25:02.808586   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.254]
	I1028 17:25:03.000249   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb ...
	I1028 17:25:03.000276   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb: {Name:mka7f7f8394389959cb184a46e51c1572954cddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000436   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb ...
	I1028 17:25:03.000449   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb: {Name:mk9ae1b9eef85a6c1bbc7739c982c84bfb111d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000555   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:03.000643   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:25:03.000695   32020 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:25:03.000710   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt with IP's: []
	I1028 17:25:03.126776   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt ...
	I1028 17:25:03.126802   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt: {Name:mk682452f5be7b32ad3e949275f7af954945db7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.126938   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key ...
	I1028 17:25:03.126948   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key: {Name:mk5feeb9713d67bfc630ef82b40280ce400bc4ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.127009   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:03.127027   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:03.127041   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:03.127053   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:03.127070   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:03.127083   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:03.127094   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:03.127106   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:03.127161   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:03.127194   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:03.127204   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:03.127228   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:03.127253   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:03.127274   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:03.127311   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:03.127335   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.127348   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.127360   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.127858   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:03.153264   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:03.175704   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:03.198131   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:03.220379   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 17:25:03.243352   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:25:03.265623   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:03.287951   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:03.312260   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:03.336494   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:03.363576   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:03.401524   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:25:03.430796   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:03.437428   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:03.448106   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452501   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452553   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.458194   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:03.468982   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:03.479358   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483520   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483564   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.488936   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:03.499033   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:03.509212   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513380   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513413   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.518680   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:25:03.528774   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:03.532547   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:03.532597   32020 kubeadm.go:392] StartCluster: {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:03.532684   32020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:25:03.532747   32020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:25:03.571597   32020 cri.go:89] found id: ""
	I1028 17:25:03.571655   32020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:25:03.581447   32020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:25:03.590775   32020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:25:03.599971   32020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:25:03.599983   32020 kubeadm.go:157] found existing configuration files:
	
	I1028 17:25:03.600011   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:25:03.608531   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:25:03.608565   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:25:03.617452   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:25:03.626079   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:25:03.626124   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:25:03.635124   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.644097   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:25:03.644143   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.653605   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:25:03.662453   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:25:03.662497   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 17:25:03.671488   32020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 17:25:03.865602   32020 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 17:25:14.531712   32020 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:25:14.531787   32020 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:25:14.531884   32020 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:25:14.532023   32020 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:25:14.532157   32020 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:25:14.532250   32020 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:25:14.533662   32020 out.go:235]   - Generating certificates and keys ...
	I1028 17:25:14.533743   32020 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:25:14.533841   32020 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:25:14.533931   32020 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:25:14.534016   32020 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:25:14.534080   32020 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:25:14.534133   32020 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:25:14.534179   32020 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:25:14.534283   32020 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534363   32020 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:25:14.534530   32020 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534620   32020 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:25:14.534728   32020 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:25:14.534800   32020 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:25:14.534868   32020 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:25:14.534934   32020 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:25:14.535013   32020 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:25:14.535092   32020 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:25:14.535200   32020 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:25:14.535281   32020 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:25:14.535399   32020 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:25:14.535478   32020 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:25:14.537017   32020 out.go:235]   - Booting up control plane ...
	I1028 17:25:14.537115   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:25:14.537184   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:25:14.537257   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:25:14.537408   32020 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:25:14.537527   32020 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:25:14.537591   32020 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:25:14.537728   32020 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:25:14.537862   32020 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:25:14.537919   32020 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001240837s
	I1028 17:25:14.537979   32020 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:25:14.538029   32020 kubeadm.go:310] [api-check] The API server is healthy after 5.745465318s
	I1028 17:25:14.538126   32020 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:25:14.538233   32020 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:25:14.538314   32020 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:25:14.538487   32020 kubeadm.go:310] [mark-control-plane] Marking the node ha-381619 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:25:14.538537   32020 kubeadm.go:310] [bootstrap-token] Using token: z48g6f.v3e9buj5ot2drke2
	I1028 17:25:14.539818   32020 out.go:235]   - Configuring RBAC rules ...
	I1028 17:25:14.539934   32020 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:25:14.540010   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:25:14.540140   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:25:14.540310   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:25:14.540484   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:25:14.540575   32020 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:25:14.540725   32020 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:25:14.540796   32020 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:25:14.540853   32020 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:25:14.540862   32020 kubeadm.go:310] 
	I1028 17:25:14.540934   32020 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:25:14.540941   32020 kubeadm.go:310] 
	I1028 17:25:14.541053   32020 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:25:14.541063   32020 kubeadm.go:310] 
	I1028 17:25:14.541098   32020 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:25:14.541149   32020 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:25:14.541207   32020 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:25:14.541220   32020 kubeadm.go:310] 
	I1028 17:25:14.541267   32020 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:25:14.541273   32020 kubeadm.go:310] 
	I1028 17:25:14.541311   32020 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:25:14.541317   32020 kubeadm.go:310] 
	I1028 17:25:14.541391   32020 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:25:14.541462   32020 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:25:14.541520   32020 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:25:14.541526   32020 kubeadm.go:310] 
	I1028 17:25:14.541594   32020 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:25:14.541676   32020 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:25:14.541684   32020 kubeadm.go:310] 
	I1028 17:25:14.541772   32020 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.541903   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 17:25:14.541939   32020 kubeadm.go:310] 	--control-plane 
	I1028 17:25:14.541952   32020 kubeadm.go:310] 
	I1028 17:25:14.542037   32020 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:25:14.542044   32020 kubeadm.go:310] 
	I1028 17:25:14.542111   32020 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.542209   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 17:25:14.542219   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:14.542223   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:14.543763   32020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 17:25:14.544966   32020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 17:25:14.550724   32020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 17:25:14.550742   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 17:25:14.570257   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 17:25:14.924676   32020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:25:14.924729   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:14.924751   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619 minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=true
	I1028 17:25:14.954780   32020 ops.go:34] apiserver oom_adj: -16
	I1028 17:25:15.130305   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:15.631369   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.131137   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.631423   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.131390   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.226452   32020 kubeadm.go:1113] duration metric: took 2.301774809s to wait for elevateKubeSystemPrivileges
	I1028 17:25:17.226483   32020 kubeadm.go:394] duration metric: took 13.693888567s to StartCluster
	I1028 17:25:17.226504   32020 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.226586   32020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.227504   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.227753   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:25:17.227749   32020 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:17.227776   32020 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 17:25:17.227845   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:25:17.227858   32020 addons.go:69] Setting storage-provisioner=true in profile "ha-381619"
	I1028 17:25:17.227896   32020 addons.go:234] Setting addon storage-provisioner=true in "ha-381619"
	I1028 17:25:17.227912   32020 addons.go:69] Setting default-storageclass=true in profile "ha-381619"
	I1028 17:25:17.227947   32020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-381619"
	I1028 17:25:17.228016   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:17.227925   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.228398   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228444   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.228490   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228533   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.243165   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I1028 17:25:17.243382   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I1028 17:25:17.243612   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.243827   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.244081   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244106   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244338   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244363   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244419   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244705   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244874   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.244986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.245028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.246886   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.247245   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 17:25:17.248034   32020 addons.go:234] Setting addon default-storageclass=true in "ha-381619"
	I1028 17:25:17.248080   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.248440   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.248495   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.248686   32020 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 17:25:17.259449   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I1028 17:25:17.259906   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.260429   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.260457   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.260757   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.260953   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.262554   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.262967   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I1028 17:25:17.263363   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.263726   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.263747   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.264078   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.264715   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.264763   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.264944   32020 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:25:17.266586   32020 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.266605   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:25:17.266623   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.269507   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.269884   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.269905   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.270038   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.270201   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.270351   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.270481   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:17.279872   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I1028 17:25:17.280334   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.280920   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.280938   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.281336   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.281528   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.283217   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.283405   32020 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.283421   32020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:25:17.283436   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.285906   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286319   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.286352   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286428   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.286601   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.286754   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.286885   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:17.359502   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:25:17.440263   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.482707   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.757670   32020 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1028 17:25:17.987134   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987176   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987203   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987222   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987446   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987453   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987512   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987532   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987544   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987486   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987487   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987697   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987716   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987723   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987752   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987764   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987811   32020 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 17:25:17.987831   32020 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 17:25:17.987933   32020 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 17:25:17.987946   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:17.987957   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:17.987961   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:17.988187   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.988302   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.988326   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.005294   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:25:18.006136   32020 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 17:25:18.006153   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:18.006163   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:18.006169   32020 round_trippers.go:473]     Content-Type: application/json
	I1028 17:25:18.006173   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:18.009564   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:25:18.009782   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:18.009793   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:18.010026   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:18.010041   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.010063   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:18.011483   32020 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 17:25:18.012573   32020 addons.go:510] duration metric: took 784.803587ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 17:25:18.012609   32020 start.go:246] waiting for cluster config update ...
	I1028 17:25:18.012623   32020 start.go:255] writing updated cluster config ...
	I1028 17:25:18.013902   32020 out.go:201] 
	I1028 17:25:18.015058   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:18.015120   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.016447   32020 out.go:177] * Starting "ha-381619-m02" control-plane node in "ha-381619" cluster
	I1028 17:25:18.017519   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:25:18.017534   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:25:18.017609   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:25:18.017619   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:25:18.017672   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.017831   32020 start.go:360] acquireMachinesLock for ha-381619-m02: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:25:18.017871   32020 start.go:364] duration metric: took 23.784µs to acquireMachinesLock for "ha-381619-m02"
	I1028 17:25:18.017886   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:18.017946   32020 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 17:25:18.019437   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:25:18.019500   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:18.019529   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:18.033319   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I1028 17:25:18.033727   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:18.034182   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:18.034200   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:18.034550   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:18.034715   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:18.034872   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:18.035033   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:25:18.035060   32020 client.go:168] LocalClient.Create starting
	I1028 17:25:18.035096   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:25:18.035126   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035142   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035187   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:25:18.035204   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035216   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035230   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:25:18.035237   32020 main.go:141] libmachine: (ha-381619-m02) Calling .PreCreateCheck
	I1028 17:25:18.035397   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:18.035746   32020 main.go:141] libmachine: Creating machine...
	I1028 17:25:18.035760   32020 main.go:141] libmachine: (ha-381619-m02) Calling .Create
	I1028 17:25:18.035901   32020 main.go:141] libmachine: (ha-381619-m02) Creating KVM machine...
	I1028 17:25:18.037157   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing default KVM network
	I1028 17:25:18.037313   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing private KVM network mk-ha-381619
	I1028 17:25:18.037431   32020 main.go:141] libmachine: (ha-381619-m02) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.037482   32020 main.go:141] libmachine: (ha-381619-m02) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:25:18.037542   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.037441   32379 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.037604   32020 main.go:141] libmachine: (ha-381619-m02) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:25:18.305482   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.305364   32379 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa...
	I1028 17:25:18.398014   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.397913   32379 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk...
	I1028 17:25:18.398067   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing magic tar header
	I1028 17:25:18.398088   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing SSH key tar header
	I1028 17:25:18.398095   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.398018   32379 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.398114   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02
	I1028 17:25:18.398136   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:25:18.398156   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 (perms=drwx------)
	I1028 17:25:18.398166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.398180   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:25:18.398187   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:25:18.398194   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:25:18.398201   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home
	I1028 17:25:18.398207   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Skipping /home - not owner
	I1028 17:25:18.398217   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:25:18.398254   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:25:18.398277   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:25:18.398289   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:25:18.398304   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:25:18.398338   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:18.399119   32020 main.go:141] libmachine: (ha-381619-m02) define libvirt domain using xml: 
	I1028 17:25:18.399128   32020 main.go:141] libmachine: (ha-381619-m02) <domain type='kvm'>
	I1028 17:25:18.399133   32020 main.go:141] libmachine: (ha-381619-m02)   <name>ha-381619-m02</name>
	I1028 17:25:18.399138   32020 main.go:141] libmachine: (ha-381619-m02)   <memory unit='MiB'>2200</memory>
	I1028 17:25:18.399142   32020 main.go:141] libmachine: (ha-381619-m02)   <vcpu>2</vcpu>
	I1028 17:25:18.399146   32020 main.go:141] libmachine: (ha-381619-m02)   <features>
	I1028 17:25:18.399154   32020 main.go:141] libmachine: (ha-381619-m02)     <acpi/>
	I1028 17:25:18.399160   32020 main.go:141] libmachine: (ha-381619-m02)     <apic/>
	I1028 17:25:18.399167   32020 main.go:141] libmachine: (ha-381619-m02)     <pae/>
	I1028 17:25:18.399171   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399177   32020 main.go:141] libmachine: (ha-381619-m02)   </features>
	I1028 17:25:18.399183   32020 main.go:141] libmachine: (ha-381619-m02)   <cpu mode='host-passthrough'>
	I1028 17:25:18.399188   32020 main.go:141] libmachine: (ha-381619-m02)   
	I1028 17:25:18.399194   32020 main.go:141] libmachine: (ha-381619-m02)   </cpu>
	I1028 17:25:18.399199   32020 main.go:141] libmachine: (ha-381619-m02)   <os>
	I1028 17:25:18.399206   32020 main.go:141] libmachine: (ha-381619-m02)     <type>hvm</type>
	I1028 17:25:18.399211   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='cdrom'/>
	I1028 17:25:18.399223   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='hd'/>
	I1028 17:25:18.399234   32020 main.go:141] libmachine: (ha-381619-m02)     <bootmenu enable='no'/>
	I1028 17:25:18.399255   32020 main.go:141] libmachine: (ha-381619-m02)   </os>
	I1028 17:25:18.399268   32020 main.go:141] libmachine: (ha-381619-m02)   <devices>
	I1028 17:25:18.399274   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='cdrom'>
	I1028 17:25:18.399282   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/boot2docker.iso'/>
	I1028 17:25:18.399289   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hdc' bus='scsi'/>
	I1028 17:25:18.399293   32020 main.go:141] libmachine: (ha-381619-m02)       <readonly/>
	I1028 17:25:18.399299   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399305   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='disk'>
	I1028 17:25:18.399316   32020 main.go:141] libmachine: (ha-381619-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:25:18.399348   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk'/>
	I1028 17:25:18.399365   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hda' bus='virtio'/>
	I1028 17:25:18.399403   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399425   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399439   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='mk-ha-381619'/>
	I1028 17:25:18.399446   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399454   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399464   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399473   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='default'/>
	I1028 17:25:18.399483   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399491   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399505   32020 main.go:141] libmachine: (ha-381619-m02)     <serial type='pty'>
	I1028 17:25:18.399516   32020 main.go:141] libmachine: (ha-381619-m02)       <target port='0'/>
	I1028 17:25:18.399525   32020 main.go:141] libmachine: (ha-381619-m02)     </serial>
	I1028 17:25:18.399531   32020 main.go:141] libmachine: (ha-381619-m02)     <console type='pty'>
	I1028 17:25:18.399536   32020 main.go:141] libmachine: (ha-381619-m02)       <target type='serial' port='0'/>
	I1028 17:25:18.399544   32020 main.go:141] libmachine: (ha-381619-m02)     </console>
	I1028 17:25:18.399554   32020 main.go:141] libmachine: (ha-381619-m02)     <rng model='virtio'>
	I1028 17:25:18.399564   32020 main.go:141] libmachine: (ha-381619-m02)       <backend model='random'>/dev/random</backend>
	I1028 17:25:18.399578   32020 main.go:141] libmachine: (ha-381619-m02)     </rng>
	I1028 17:25:18.399588   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399596   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399604   32020 main.go:141] libmachine: (ha-381619-m02)   </devices>
	I1028 17:25:18.399613   32020 main.go:141] libmachine: (ha-381619-m02) </domain>
	I1028 17:25:18.399622   32020 main.go:141] libmachine: (ha-381619-m02) 
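
For context on the domain XML above: the kvm2 driver hands this definition to libvirt to create and boot the guest. A minimal, illustrative sketch of that step using the libvirt.org/go/libvirt bindings follows; the import path, the xml argument and the error handling are assumptions for illustration, not minikube's actual code path:

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    func defineAndStart(xml string) error {
        // Connect to the same libvirt URI the log shows (qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        // Persistently define the domain from the XML, then boot it.
        dom, err := conn.DomainDefineXML(xml)
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create()
    }

    func main() {
        if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
            log.Fatal(err)
        }
    }
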
	I1028 17:25:18.405867   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:26:9b:68 in network default
	I1028 17:25:18.406379   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring networks are active...
	I1028 17:25:18.406395   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:18.407090   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network default is active
	I1028 17:25:18.407385   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network mk-ha-381619 is active
	I1028 17:25:18.407717   32020 main.go:141] libmachine: (ha-381619-m02) Getting domain xml...
	I1028 17:25:18.408378   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:19.597563   32020 main.go:141] libmachine: (ha-381619-m02) Waiting to get IP...
	I1028 17:25:19.598384   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.598740   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.598789   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.598740   32379 retry.go:31] will retry after 190.903064ms: waiting for machine to come up
	I1028 17:25:19.791078   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.791557   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.791589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.791498   32379 retry.go:31] will retry after 306.415198ms: waiting for machine to come up
	I1028 17:25:20.099990   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.100410   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.100438   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.100363   32379 retry.go:31] will retry after 461.052427ms: waiting for machine to come up
	I1028 17:25:20.562787   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.563226   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.563254   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.563181   32379 retry.go:31] will retry after 399.454176ms: waiting for machine to come up
	I1028 17:25:20.964734   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.965138   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.965168   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.965088   32379 retry.go:31] will retry after 468.537228ms: waiting for machine to come up
	I1028 17:25:21.435633   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:21.436036   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:21.436065   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:21.435978   32379 retry.go:31] will retry after 901.623232ms: waiting for machine to come up
	I1028 17:25:22.338882   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:22.339214   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:22.339251   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:22.339170   32379 retry.go:31] will retry after 1.174231376s: waiting for machine to come up
	I1028 17:25:23.514567   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:23.515122   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:23.515148   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:23.515075   32379 retry.go:31] will retry after 1.47285995s: waiting for machine to come up
	I1028 17:25:24.989376   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:24.989742   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:24.989772   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:24.989693   32379 retry.go:31] will retry after 1.395202662s: waiting for machine to come up
	I1028 17:25:26.387051   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:26.387470   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:26.387497   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:26.387419   32379 retry.go:31] will retry after 1.648219706s: waiting for machine to come up
	I1028 17:25:28.036842   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:28.037349   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:28.037375   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:28.037295   32379 retry.go:31] will retry after 2.189322328s: waiting for machine to come up
	I1028 17:25:30.229493   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:30.229820   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:30.229841   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:30.229780   32379 retry.go:31] will retry after 2.90274213s: waiting for machine to come up
	I1028 17:25:33.134730   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:33.135076   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:33.135092   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:33.135034   32379 retry.go:31] will retry after 4.079584337s: waiting for machine to come up
	I1028 17:25:37.219140   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:37.219485   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:37.219505   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:37.219442   32379 retry.go:31] will retry after 4.856708442s: waiting for machine to come up
	I1028 17:25:42.077346   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077745   32020 main.go:141] libmachine: (ha-381619-m02) Found IP for machine: 192.168.39.171
	I1028 17:25:42.077766   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has current primary IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077785   32020 main.go:141] libmachine: (ha-381619-m02) Reserving static IP address...
	I1028 17:25:42.078069   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "ha-381619-m02", mac: "52:54:00:ab:1d:c9", ip: "192.168.39.171"} in network mk-ha-381619
	I1028 17:25:42.145216   32020 main.go:141] libmachine: (ha-381619-m02) Reserved static IP address: 192.168.39.171
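
The retry.go lines above poll the libvirt DHCP leases with a growing delay (190ms, 306ms, ... 4.8s) until the guest obtains an address. A self-contained sketch of that wait loop follows; lookupIP is a hypothetical stand-in for the lease query, and the backoff constants are illustrative rather than minikube's own:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for reading the DHCP leases of the mk-ha-381619 network.
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP retries lookupIP with an increasing, jittered delay until it
    // succeeds or the overall timeout expires.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the delay, as the retry messages above do
            }
        }
        return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
    }

    func main() {
        ip, err := waitForIP(30 * time.Second)
        fmt.Println(ip, err)
    }
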
	I1028 17:25:42.145248   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:42.145256   32020 main.go:141] libmachine: (ha-381619-m02) Waiting for SSH to be available...
	I1028 17:25:42.147449   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.147844   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619
	I1028 17:25:42.147868   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:ab:1d:c9
	I1028 17:25:42.148011   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:42.148037   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:42.148079   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:42.148093   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:42.148106   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:42.151405   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:25:42.151422   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:25:42.151430   32020 main.go:141] libmachine: (ha-381619-m02) DBG | command : exit 0
	I1028 17:25:42.151434   32020 main.go:141] libmachine: (ha-381619-m02) DBG | err     : exit status 255
	I1028 17:25:42.151457   32020 main.go:141] libmachine: (ha-381619-m02) DBG | output  : 
	I1028 17:25:45.153548   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:45.155666   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156001   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.156026   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156153   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:45.156174   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:45.156209   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:45.156220   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:45.156228   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:45.284123   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 17:25:45.284412   32020 main.go:141] libmachine: (ha-381619-m02) KVM machine creation complete!
	I1028 17:25:45.284721   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:45.285293   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285476   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285636   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:25:45.285651   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetState
	I1028 17:25:45.286839   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:25:45.286853   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:25:45.286874   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:25:45.286883   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.289343   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289699   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.289732   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289877   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.290050   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290180   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290283   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.290450   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.290659   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.290673   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:25:45.403429   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
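
The native SSH probe above simply runs "exit 0" on the new guest until it succeeds. A compact illustration of the same liveness check with golang.org/x/crypto/ssh, using the key path and address from the log; this is a sketch, not libmachine's implementation:

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.171:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // A zero exit status means sshd is up and the key is accepted.
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }
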
	I1028 17:25:45.403453   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:25:45.403460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.406169   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406520   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.406547   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406664   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.406833   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.406968   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.407121   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.407274   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.407471   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.407486   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:25:45.516915   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:25:45.516972   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:25:45.516982   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:25:45.516996   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517247   32020 buildroot.go:166] provisioning hostname "ha-381619-m02"
	I1028 17:25:45.517269   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.520442   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.520895   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.520951   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.521136   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.521306   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521441   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521550   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.521679   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.521869   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.521885   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m02 && echo "ha-381619-m02" | sudo tee /etc/hostname
	I1028 17:25:45.647896   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m02
	
	I1028 17:25:45.647923   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.650559   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.650915   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.650946   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.651119   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.651299   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651606   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.651778   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.651948   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.651967   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:25:45.773264   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:25:45.773293   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:25:45.773315   32020 buildroot.go:174] setting up certificates
	I1028 17:25:45.773322   32020 provision.go:84] configureAuth start
	I1028 17:25:45.773330   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.773552   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:45.776602   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.776920   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.776944   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.777092   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.779167   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779415   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.779440   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779566   32020 provision.go:143] copyHostCerts
	I1028 17:25:45.779590   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779620   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:25:45.779629   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779712   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:25:45.779784   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779808   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:25:45.779815   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779839   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:25:45.779883   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779899   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:25:45.779905   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779925   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:25:45.779969   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m02 san=[127.0.0.1 192.168.39.171 ha-381619-m02 localhost minikube]
	I1028 17:25:45.949948   32020 provision.go:177] copyRemoteCerts
	I1028 17:25:45.950001   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:25:45.950022   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.952596   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.952955   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.953006   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.953158   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.953335   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.953473   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.953584   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.038279   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:25:46.038337   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:25:46.061947   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:25:46.062008   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:25:46.084393   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:25:46.084451   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:25:46.107114   32020 provision.go:87] duration metric: took 333.781683ms to configureAuth
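
configureAuth above generates a per-machine server certificate whose SANs cover 127.0.0.1, 192.168.39.171, ha-381619-m02, localhost and minikube, signed by the local CA. A condensed sketch of issuing such a certificate with crypto/x509, assuming caCert and caKey are already loaded; this is illustrative, not the minikube provisioner itself:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // NewServerCert signs a server certificate for the SANs listed in the log.
    func NewServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-381619-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-381619-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.171")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }
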
	I1028 17:25:46.107142   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:25:46.107303   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:46.107385   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.110324   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110650   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.110678   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110841   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.111029   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111171   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111337   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.111521   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.111668   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.111682   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:25:46.333665   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:25:46.333687   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:25:46.333695   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetURL
	I1028 17:25:46.335063   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using libvirt version 6000000
	I1028 17:25:46.337491   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.337821   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.337850   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.338022   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:25:46.338038   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:25:46.338046   32020 client.go:171] duration metric: took 28.302974924s to LocalClient.Create
	I1028 17:25:46.338089   32020 start.go:167] duration metric: took 28.303046594s to libmachine.API.Create "ha-381619"
	I1028 17:25:46.338103   32020 start.go:293] postStartSetup for "ha-381619-m02" (driver="kvm2")
	I1028 17:25:46.338115   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:25:46.338137   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.338375   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:25:46.338401   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.340858   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341271   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.341298   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.341568   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.341713   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.341825   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.426689   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:25:46.431014   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:25:46.431038   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:25:46.431111   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:25:46.431208   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:25:46.431224   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:25:46.431391   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:25:46.440073   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:46.463120   32020 start.go:296] duration metric: took 125.005816ms for postStartSetup
	I1028 17:25:46.463168   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:46.463762   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.466198   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466494   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.466531   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466725   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:46.466921   32020 start.go:128] duration metric: took 28.448963909s to createHost
	I1028 17:25:46.466949   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.469249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469565   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.469589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469704   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.469861   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.469984   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.470143   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.470307   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.470485   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.470498   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:25:46.580856   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136346.562587281
	
	I1028 17:25:46.580878   32020 fix.go:216] guest clock: 1730136346.562587281
	I1028 17:25:46.580887   32020 fix.go:229] Guest: 2024-10-28 17:25:46.562587281 +0000 UTC Remote: 2024-10-28 17:25:46.466934782 +0000 UTC m=+73.797903078 (delta=95.652499ms)
	I1028 17:25:46.580901   32020 fix.go:200] guest clock delta is within tolerance: 95.652499ms
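
The fix.go lines above read "date +%s.%N" on the guest and compare it with the host clock, accepting a drift of well under a second. A small sketch of that comparison follows; the parsing mirrors the 1730136346.562587281 sample above, and the 2-second tolerance is an assumption rather than minikube's constant:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the guest's `date +%s.%N` output and returns how
    // far it is from the supplied host time.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        delta, err := guestClockDelta("1730136346.562587281\n", time.Now())
        if err != nil {
            panic(err)
        }
        fmt.Printf("delta %s, within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < 2)
    }
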
	I1028 17:25:46.580907   32020 start.go:83] releasing machines lock for "ha-381619-m02", held for 28.563026837s
	I1028 17:25:46.580924   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.581186   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.583856   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.584218   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.584249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.586494   32020 out.go:177] * Found network options:
	I1028 17:25:46.587894   32020 out.go:177]   - NO_PROXY=192.168.39.230
	W1028 17:25:46.589029   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589070   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589532   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589694   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589788   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:25:46.589827   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	W1028 17:25:46.589854   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589924   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:25:46.589942   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.592456   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592681   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592853   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.592873   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592998   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593129   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.593189   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.593257   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593327   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593495   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593488   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.593663   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593796   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.834104   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:25:46.840249   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:25:46.840309   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:25:46.857442   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
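	Note: the find/-exec rename above parks any pre-existing bridge/podman CNI configs so only the cluster's own CNI is loaded. A manual equivalent (sketch, not part of the captured run):
	    ls /etc/cni/net.d/                                                # list CNI configs shipped with the image
	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist{,.mk_disabled}   # same rename the command above performs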
	I1028 17:25:46.857462   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:25:46.857520   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:25:46.874062   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:25:46.887622   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:25:46.887678   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:25:46.901054   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:25:46.914614   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:25:47.030203   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:25:47.173397   32020 docker.go:233] disabling docker service ...
	I1028 17:25:47.173471   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:25:47.187602   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:25:47.200124   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:25:47.343002   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:25:47.463446   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:25:47.477391   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:25:47.495284   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:25:47.495336   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.505232   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:25:47.505290   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.515205   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.524903   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.534665   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:25:47.544548   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.554185   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.570492   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
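	Note: the sed edits above pin the pause image and cgroup driver, move conmon into the pod cgroup, and allow unprivileged low ports in the CRI-O drop-in. A quick way to confirm the result (sketch, not from the captured run):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    cat /etc/crictl.yaml        # should point crictl at unix:///var/run/crio/crio.sock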
	I1028 17:25:47.580150   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:25:47.588959   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:25:47.588998   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:25:47.602144   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
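	Note: bridged pod traffic needs the br_netfilter module and IPv4 forwarding; the sysctl probe failed above only because the module was not loaded yet. The same preparation done by hand (sketch, assumed commands):
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables               # readable (and normally 1) once the module is loaded
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'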
	I1028 17:25:47.611274   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:47.728237   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:25:47.819661   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:25:47.819739   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:25:47.825086   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:25:47.825133   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:25:47.828919   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:25:47.865608   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:25:47.865686   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.891971   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.920487   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:25:47.921941   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:25:47.923245   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:47.926002   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926296   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:47.926314   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926539   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:25:47.930572   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
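	Note: the one-liner above refreshes the host.minikube.internal mapping by dropping any stale line and appending the current one via a temp file. A generalized form of the same trick (assumed helper, not minikube code):
	    update_hosts_entry() {
	      local ip="$1" name="$2"
	      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	      sudo cp "/tmp/hosts.$$" /etc/hosts
	    }
	    update_hosts_entry 192.168.39.1 host.minikube.internal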
	I1028 17:25:47.943132   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:25:47.943291   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:47.943533   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.943566   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.957947   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I1028 17:25:47.958254   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.958709   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.958727   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.959022   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.959199   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:47.960488   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:47.960756   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.960791   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.974636   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I1028 17:25:47.975037   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.975478   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.975496   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.975773   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.975952   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:47.976140   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.171
	I1028 17:25:47.976153   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:47.976170   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:47.976307   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:47.976364   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:47.976377   32020 certs.go:256] generating profile certs ...
	I1028 17:25:47.976489   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:47.976518   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6
	I1028 17:25:47.976537   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.254]
	I1028 17:25:48.173298   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 ...
	I1028 17:25:48.173326   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6: {Name:mkf5ce350ef4737e80e11fe080b891074a0af9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173482   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 ...
	I1028 17:25:48.173493   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6: {Name:mk4892e87f7052cc8a58e00369d3170cecec3e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173560   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:48.173681   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:25:48.173810   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:25:48.173826   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:48.173840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:48.173854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:48.173866   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:48.173879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:48.173891   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:48.173902   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:48.173913   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:48.173957   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:48.173999   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:48.174009   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:48.174030   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:48.174051   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:48.174071   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:48.174117   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:48.174144   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.174158   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.174169   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.174198   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:48.177148   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177545   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:48.177579   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177737   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:48.177910   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:48.178048   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:48.178158   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:48.248817   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:25:48.254098   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:25:48.264499   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:25:48.268575   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:25:48.278929   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:25:48.283180   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:25:48.292856   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:25:48.296876   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:25:48.306132   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:25:48.310003   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:25:48.319418   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:25:48.323887   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:25:48.335408   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:48.360541   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:48.384095   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:48.407120   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:48.429601   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 17:25:48.452108   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:25:48.474717   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:48.497519   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:48.519884   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:48.542530   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:48.565246   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:48.587411   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:25:48.603353   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:25:48.618794   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:25:48.634198   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:25:48.649902   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:25:48.665540   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:25:48.680907   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
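	Note: the apiserver certificate regenerated above must list the new node IP 192.168.39.171 and the HA VIP 192.168.39.254 among its SANs. A spot-check on the target node (sketch, not part of the run):
	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'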
	I1028 17:25:48.697446   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:48.703204   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:48.713589   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718016   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718162   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.723740   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:48.734297   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:48.744539   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748653   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748709   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.754164   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:48.764209   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:48.774379   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778691   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778734   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.784288   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:25:48.794987   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:48.799006   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:48.799053   32020 kubeadm.go:934] updating node {m02 192.168.39.171 8443 v1.31.2 crio true true} ...
	I1028 17:25:48.799121   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:48.799142   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:48.799168   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:48.823470   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:48.823527   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
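	Note: once kubelet picks up this static-pod manifest, kube-vip should claim the VIP 192.168.39.254 and answer on port 8443. Assumed checks (not from the captured run):
	    sudo crictl ps --name kube-vip                      # static pod running under CRI-O
	    curl -k https://192.168.39.254:8443/healthz         # VIP answers once leader election settles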
	I1028 17:25:48.823569   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.835145   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:25:48.835188   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.844460   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:25:48.844491   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844545   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844552   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 17:25:48.844586   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 17:25:48.848931   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:25:48.848960   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:25:49.845765   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.845846   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.851022   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:25:49.851049   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:25:49.995196   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:25:50.018003   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.018112   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.028108   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:25:50.028154   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
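	Note: kubectl, kubeadm and kubelet are fetched from dl.k8s.io with their published sha256 checksums and placed under /var/lib/minikube/binaries/v1.31.2. A manual equivalent (sketch; same URLs as in the log):
	    V=v1.31.2
	    for b in kubeadm kubectl kubelet; do
	      curl -LO "https://dl.k8s.io/release/${V}/bin/linux/amd64/${b}"
	      curl -LO "https://dl.k8s.io/release/${V}/bin/linux/amd64/${b}.sha256"
	      echo "$(cat ${b}.sha256)  ${b}" | sha256sum --check -
	      sudo install -m 0755 "${b}" "/var/lib/minikube/binaries/${V}/${b}"
	    done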
	I1028 17:25:50.413235   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:25:50.422462   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 17:25:50.439863   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:50.457114   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:25:50.474256   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:50.477946   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:50.489942   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:50.615829   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
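	Note: with the unit files and kube-vip manifest in place, kubelet is started on the new node; its health can be inspected directly (assumed commands, not from the run):
	    sudo systemctl is-active kubelet
	    sudo journalctl -u kubelet -n 20 --no-pager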
	I1028 17:25:50.634721   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:50.635033   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:50.635082   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:50.649391   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I1028 17:25:50.649767   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:50.650191   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:50.650209   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:50.650503   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:50.650660   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:50.650788   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:50.650874   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:25:50.650889   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:50.653655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654061   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:50.654087   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654224   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:50.654401   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:50.654535   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:50.654636   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:50.789658   32020 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:50.789699   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443"
	I1028 17:26:12.167714   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443": (21.377987897s)
	I1028 17:26:12.167759   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:26:12.604075   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m02 minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:26:12.730286   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:26:12.839048   32020 start.go:319] duration metric: took 22.188254958s to joinCluster
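	Note: after the kubeadm join, the node is labeled and its control-plane taint removed so it also schedules workloads. Assumed follow-up checks from the harness host (not part of the captured run):
	    kubectl --context ha-381619 get nodes -o wide                         # ha-381619-m02 listed, NotReady until the CNI is up
	    kubectl --context ha-381619 -n kube-system get pods -l component=etcd # one etcd member per control-plane node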
	I1028 17:26:12.839133   32020 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:12.839439   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:12.840330   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:26:12.841472   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:26:13.041048   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:26:13.058928   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:26:13.059251   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:26:13.059331   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:26:13.059574   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m02" to be "Ready" ...
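	Note: the poll that follows re-reads the node object roughly every 500ms until its Ready condition turns True. The same wait expressed as one command (sketch, assuming the kubectl context name matches the profile):
	    kubectl --context ha-381619 wait --for=condition=Ready node/ha-381619-m02 --timeout=6m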
	I1028 17:26:13.059667   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.059677   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.059688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.059694   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.077343   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:26:13.560169   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.560188   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.560196   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.560200   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.573882   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:14.060794   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.060818   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.060828   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.060835   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.068335   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:14.560535   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.560554   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.560562   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.560567   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.564008   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:15.060016   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.060055   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.060066   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.060072   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.064096   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:15.064637   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:15.559999   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.560030   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.560041   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.560046   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.563431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.059828   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.059852   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.059862   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.059867   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.063732   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.560697   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.560722   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.560733   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.560739   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.564261   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:17.060671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.060698   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.060711   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.060718   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.064995   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:17.066041   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:17.560713   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.560732   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.560749   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.563531   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:18.060093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.060116   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.060127   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.060135   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.064122   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:18.559857   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.559879   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.559887   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.559898   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.563832   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:19.059842   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.059867   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.059879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.059884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.065030   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:19.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.559871   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.559879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.559884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.562800   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:19.563587   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:20.059873   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.059895   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.059905   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.059912   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.073315   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:20.560212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.560231   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.560239   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.560243   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.650492   32020 round_trippers.go:574] Response Status: 200 OK in 90 milliseconds
	I1028 17:26:21.059937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.059963   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.059974   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.059979   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.064508   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:21.560559   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.560581   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.560590   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.560594   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.563714   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:21.564443   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:22.059724   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.059744   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.059752   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.059757   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.063391   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:22.560710   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.560731   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.560738   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.563846   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.060524   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.060544   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.060554   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.060561   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.064448   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.560417   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.560438   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.560447   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.560451   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.563535   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.060636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.060664   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.060675   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.060683   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.064043   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.064451   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:24.559868   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.559899   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.559907   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.559910   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.562925   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:25.059880   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.059902   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.059910   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.059915   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.063972   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:25.559872   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.559894   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.559901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.559905   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.563081   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:26.060748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.060770   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.060782   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.060788   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.064990   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:26.065576   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:26.559841   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.559863   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.559871   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.559876   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.562740   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:27.059746   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.059768   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.059775   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.059779   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.063135   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:27.560126   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.560145   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.560153   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.560158   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.563096   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:28.060723   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.060746   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.060757   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.060763   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.065003   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:28.560732   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.560757   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.560767   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.560774   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.563965   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:28.564617   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:29.059876   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.059903   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.059914   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.059919   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.067282   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:29.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.559872   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.559880   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.559883   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.562804   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:30.059831   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.059853   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.059867   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.059875   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.063855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:30.560631   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.560653   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.560665   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.560670   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.563630   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:31.059907   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.059925   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.059933   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.059938   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.064319   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:31.065078   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:31.560248   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.560271   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.560278   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.560282   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.563146   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:32.059755   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.059779   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.059790   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.059796   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.065145   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:32.560006   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.560026   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.560034   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.560038   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.563453   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.060614   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.060633   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.060641   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.060647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.064544   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.066373   32020 node_ready.go:49] node "ha-381619-m02" has status "Ready":"True"
	I1028 17:26:33.066389   32020 node_ready.go:38] duration metric: took 20.006796944s for node "ha-381619-m02" to be "Ready" ...
	I1028 17:26:33.066397   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:26:33.066462   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:33.066470   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.066477   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.066482   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.074203   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:33.082515   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.082586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:26:33.082595   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.082602   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.082607   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.095144   32020 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 17:26:33.095832   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.095846   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.095854   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.095858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.101134   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:33.101733   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.101757   32020 pod_ready.go:82] duration metric: took 19.21928ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101770   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101833   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:26:33.101844   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.101853   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.101858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.105945   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.108337   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.108355   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.108367   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.108372   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.113026   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.113662   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.113683   32020 pod_ready.go:82] duration metric: took 11.906137ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113694   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113752   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:26:33.113762   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.113774   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.113782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.123002   32020 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 17:26:33.123632   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.123647   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.123654   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.123658   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.127965   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.128570   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.128593   32020 pod_ready.go:82] duration metric: took 14.890353ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128604   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128669   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:26:33.128680   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.128690   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.128695   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.132736   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.133266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.133282   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.133291   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.133297   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.135365   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:33.135735   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.135750   32020 pod_ready.go:82] duration metric: took 7.136636ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.135762   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.261122   32020 request.go:632] Waited for 125.293136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261209   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261217   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.261226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.261234   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.263967   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:33.461031   32020 request.go:632] Waited for 196.380501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461114   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461126   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.461137   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.461148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.465245   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.465839   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.465854   32020 pod_ready.go:82] duration metric: took 330.085581ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.465863   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.661130   32020 request.go:632] Waited for 195.210858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661218   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.661226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.661231   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.664592   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.861613   32020 request.go:632] Waited for 196.398754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861693   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.861703   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.861708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.865300   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.865923   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.865943   32020 pod_ready.go:82] duration metric: took 400.074085ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.865954   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.061082   32020 request.go:632] Waited for 195.035949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061146   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061154   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.061164   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.061177   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.065243   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:34.261295   32020 request.go:632] Waited for 195.377372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261362   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261369   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.261377   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.261384   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.264122   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:34.264806   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.264824   32020 pod_ready.go:82] duration metric: took 398.860925ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.264834   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.461015   32020 request.go:632] Waited for 196.107238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461086   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461092   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.461099   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.461107   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.464532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.661679   32020 request.go:632] Waited for 196.369344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661755   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.661763   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.661769   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.664905   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.665450   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.665471   32020 pod_ready.go:82] duration metric: took 400.628457ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.665485   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.861555   32020 request.go:632] Waited for 195.998426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861607   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861612   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.861619   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.861625   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.865054   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.061002   32020 request.go:632] Waited for 195.260133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061074   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061081   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.061090   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.061103   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.067316   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:35.067855   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.067872   32020 pod_ready.go:82] duration metric: took 402.381503ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.067883   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.261021   32020 request.go:632] Waited for 193.06469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261075   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261080   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.261087   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.261091   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.264532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.461647   32020 request.go:632] Waited for 196.379594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461699   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461704   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.461712   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.461716   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.464708   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:35.465310   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.465326   32020 pod_ready.go:82] duration metric: took 397.438256ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.465336   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.660832   32020 request.go:632] Waited for 195.429914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660887   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660892   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.660901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.660906   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.664825   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.861091   32020 request.go:632] Waited for 195.400527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861176   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861185   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.861193   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.861199   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.864874   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.865496   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.865512   32020 pod_ready.go:82] duration metric: took 400.170514ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.865524   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.061640   32020 request.go:632] Waited for 196.040174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061702   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.061709   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.061712   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.067912   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:36.260741   32020 request.go:632] Waited for 192.270672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260796   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260801   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.260808   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.260811   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.264431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.265062   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:36.265078   32020 pod_ready.go:82] duration metric: took 399.548106ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.265089   32020 pod_ready.go:39] duration metric: took 3.19868237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:26:36.265105   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:26:36.265162   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:26:36.280395   32020 api_server.go:72] duration metric: took 23.441229274s to wait for apiserver process to appear ...
	I1028 17:26:36.280422   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:26:36.280444   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:26:36.284951   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 17:26:36.285015   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:26:36.285023   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.285030   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.285034   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.285954   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:26:36.286036   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:26:36.286049   32020 api_server.go:131] duration metric: took 5.621129ms to wait for apiserver health ...
	I1028 17:26:36.286055   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:26:36.461480   32020 request.go:632] Waited for 175.36266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461560   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461566   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.461573   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.461579   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.465870   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.471332   32020 system_pods.go:59] 17 kube-system pods found
	I1028 17:26:36.471364   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.471372   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.471378   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.471384   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.471389   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.471394   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.471398   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.471404   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.471410   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.471415   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.471420   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.471423   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.471427   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.471431   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.471439   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.471443   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.471447   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.471452   32020 system_pods.go:74] duration metric: took 185.392371ms to wait for pod list to return data ...
	I1028 17:26:36.471461   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:26:36.660798   32020 request.go:632] Waited for 189.265217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660858   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660865   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.660876   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.660890   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.664250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.664492   32020 default_sa.go:45] found service account: "default"
	I1028 17:26:36.664512   32020 default_sa.go:55] duration metric: took 193.044588ms for default service account to be created ...
	I1028 17:26:36.664525   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:26:36.860686   32020 request.go:632] Waited for 196.070222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860774   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860785   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.860796   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.860806   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.865017   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.869263   32020 system_pods.go:86] 17 kube-system pods found
	I1028 17:26:36.869283   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.869289   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.869294   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.869300   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.869305   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.869318   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.869324   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.869332   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.869341   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.869344   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.869348   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.869351   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.869355   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.869359   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.869362   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.869368   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.869371   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.869378   32020 system_pods.go:126] duration metric: took 204.847439ms to wait for k8s-apps to be running ...
	I1028 17:26:36.869387   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:26:36.869438   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:26:36.887558   32020 system_svc.go:56] duration metric: took 18.164041ms WaitForService to wait for kubelet
	I1028 17:26:36.887583   32020 kubeadm.go:582] duration metric: took 24.048418465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:26:36.887603   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:26:37.061041   32020 request.go:632] Waited for 173.358173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061125   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061137   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:37.061147   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:37.061157   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:37.065908   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:37.066717   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066739   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066750   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066754   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066758   32020 node_conditions.go:105] duration metric: took 179.146781ms to run NodePressure ...
	I1028 17:26:37.066780   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:26:37.066813   32020 start.go:255] writing updated cluster config ...
	I1028 17:26:37.068764   32020 out.go:201] 
	I1028 17:26:37.070024   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:37.070105   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.071682   32020 out.go:177] * Starting "ha-381619-m03" control-plane node in "ha-381619" cluster
	I1028 17:26:37.072951   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:26:37.072974   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:26:37.073061   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:26:37.073071   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:26:37.073157   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.073328   32020 start.go:360] acquireMachinesLock for ha-381619-m03: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:26:37.073367   32020 start.go:364] duration metric: took 22.448µs to acquireMachinesLock for "ha-381619-m03"
	I1028 17:26:37.073383   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:37.073468   32020 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 17:26:37.074992   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:26:37.075063   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:26:37.075098   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:26:37.089635   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I1028 17:26:37.090045   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:26:37.090591   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:26:37.090617   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:26:37.090932   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:26:37.091131   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:26:37.091290   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:26:37.091438   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:26:37.091470   32020 client.go:168] LocalClient.Create starting
	I1028 17:26:37.091506   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:26:37.091543   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091562   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091624   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:26:37.091649   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091665   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091691   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:26:37.091702   32020 main.go:141] libmachine: (ha-381619-m03) Calling .PreCreateCheck
	I1028 17:26:37.091853   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:26:37.092216   32020 main.go:141] libmachine: Creating machine...
	I1028 17:26:37.092231   32020 main.go:141] libmachine: (ha-381619-m03) Calling .Create
	I1028 17:26:37.092346   32020 main.go:141] libmachine: (ha-381619-m03) Creating KVM machine...
	I1028 17:26:37.093689   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing default KVM network
	I1028 17:26:37.093825   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing private KVM network mk-ha-381619
	I1028 17:26:37.094015   32020 main.go:141] libmachine: (ha-381619-m03) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.094041   32020 main.go:141] libmachine: (ha-381619-m03) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:26:37.094128   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.093979   32807 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.094183   32020 main.go:141] libmachine: (ha-381619-m03) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:26:37.334476   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.334350   32807 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa...
	I1028 17:26:37.512343   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512238   32807 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk...
	I1028 17:26:37.512368   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing magic tar header
	I1028 17:26:37.512408   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing SSH key tar header
	I1028 17:26:37.512432   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512349   32807 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.512450   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03
	I1028 17:26:37.512458   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:26:37.512478   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.512486   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:26:37.512517   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:26:37.512536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:26:37.512545   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 (perms=drwx------)
	I1028 17:26:37.512553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home
	I1028 17:26:37.512565   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:26:37.512581   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:26:37.512594   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:26:37.512609   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:26:37.512619   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Skipping /home - not owner
	I1028 17:26:37.512629   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:26:37.512638   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:37.513512   32020 main.go:141] libmachine: (ha-381619-m03) define libvirt domain using xml: 
	I1028 17:26:37.513530   32020 main.go:141] libmachine: (ha-381619-m03) <domain type='kvm'>
	I1028 17:26:37.513546   32020 main.go:141] libmachine: (ha-381619-m03)   <name>ha-381619-m03</name>
	I1028 17:26:37.513552   32020 main.go:141] libmachine: (ha-381619-m03)   <memory unit='MiB'>2200</memory>
	I1028 17:26:37.513557   32020 main.go:141] libmachine: (ha-381619-m03)   <vcpu>2</vcpu>
	I1028 17:26:37.513561   32020 main.go:141] libmachine: (ha-381619-m03)   <features>
	I1028 17:26:37.513566   32020 main.go:141] libmachine: (ha-381619-m03)     <acpi/>
	I1028 17:26:37.513572   32020 main.go:141] libmachine: (ha-381619-m03)     <apic/>
	I1028 17:26:37.513577   32020 main.go:141] libmachine: (ha-381619-m03)     <pae/>
	I1028 17:26:37.513584   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513589   32020 main.go:141] libmachine: (ha-381619-m03)   </features>
	I1028 17:26:37.513595   32020 main.go:141] libmachine: (ha-381619-m03)   <cpu mode='host-passthrough'>
	I1028 17:26:37.513600   32020 main.go:141] libmachine: (ha-381619-m03)   
	I1028 17:26:37.513606   32020 main.go:141] libmachine: (ha-381619-m03)   </cpu>
	I1028 17:26:37.513611   32020 main.go:141] libmachine: (ha-381619-m03)   <os>
	I1028 17:26:37.513617   32020 main.go:141] libmachine: (ha-381619-m03)     <type>hvm</type>
	I1028 17:26:37.513622   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='cdrom'/>
	I1028 17:26:37.513630   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='hd'/>
	I1028 17:26:37.513634   32020 main.go:141] libmachine: (ha-381619-m03)     <bootmenu enable='no'/>
	I1028 17:26:37.513638   32020 main.go:141] libmachine: (ha-381619-m03)   </os>
	I1028 17:26:37.513643   32020 main.go:141] libmachine: (ha-381619-m03)   <devices>
	I1028 17:26:37.513647   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='cdrom'>
	I1028 17:26:37.513655   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/boot2docker.iso'/>
	I1028 17:26:37.513660   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hdc' bus='scsi'/>
	I1028 17:26:37.513664   32020 main.go:141] libmachine: (ha-381619-m03)       <readonly/>
	I1028 17:26:37.513668   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513673   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='disk'>
	I1028 17:26:37.513679   32020 main.go:141] libmachine: (ha-381619-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:26:37.513689   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk'/>
	I1028 17:26:37.513697   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hda' bus='virtio'/>
	I1028 17:26:37.513728   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513752   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513762   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='mk-ha-381619'/>
	I1028 17:26:37.513777   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513799   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513818   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513832   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='default'/>
	I1028 17:26:37.513842   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513850   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513860   32020 main.go:141] libmachine: (ha-381619-m03)     <serial type='pty'>
	I1028 17:26:37.513868   32020 main.go:141] libmachine: (ha-381619-m03)       <target port='0'/>
	I1028 17:26:37.513877   32020 main.go:141] libmachine: (ha-381619-m03)     </serial>
	I1028 17:26:37.513888   32020 main.go:141] libmachine: (ha-381619-m03)     <console type='pty'>
	I1028 17:26:37.513899   32020 main.go:141] libmachine: (ha-381619-m03)       <target type='serial' port='0'/>
	I1028 17:26:37.513908   32020 main.go:141] libmachine: (ha-381619-m03)     </console>
	I1028 17:26:37.513919   32020 main.go:141] libmachine: (ha-381619-m03)     <rng model='virtio'>
	I1028 17:26:37.513932   32020 main.go:141] libmachine: (ha-381619-m03)       <backend model='random'>/dev/random</backend>
	I1028 17:26:37.513941   32020 main.go:141] libmachine: (ha-381619-m03)     </rng>
	I1028 17:26:37.513954   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513965   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513971   32020 main.go:141] libmachine: (ha-381619-m03)   </devices>
	I1028 17:26:37.513978   32020 main.go:141] libmachine: (ha-381619-m03) </domain>
	I1028 17:26:37.513992   32020 main.go:141] libmachine: (ha-381619-m03) 
	I1028 17:26:37.520796   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:6b:b8:f1 in network default
	I1028 17:26:37.521360   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring networks are active...
	I1028 17:26:37.521387   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:37.521985   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network default is active
	I1028 17:26:37.522251   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network mk-ha-381619 is active
	I1028 17:26:37.522562   32020 main.go:141] libmachine: (ha-381619-m03) Getting domain xml...
	I1028 17:26:37.523108   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:38.733507   32020 main.go:141] libmachine: (ha-381619-m03) Waiting to get IP...
	I1028 17:26:38.734445   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:38.734847   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:38.734874   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:38.734831   32807 retry.go:31] will retry after 277.511241ms: waiting for machine to come up
	I1028 17:26:39.014311   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.014705   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.014731   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.014657   32807 retry.go:31] will retry after 249.568431ms: waiting for machine to come up
	I1028 17:26:39.266003   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.266417   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.266438   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.266379   32807 retry.go:31] will retry after 332.313659ms: waiting for machine to come up
	I1028 17:26:39.599811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.600199   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.600224   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.600155   32807 retry.go:31] will retry after 498.320063ms: waiting for machine to come up
	I1028 17:26:40.099601   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.100068   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.100102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.100010   32807 retry.go:31] will retry after 620.508522ms: waiting for machine to come up
	I1028 17:26:40.721631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.722075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.722102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.722032   32807 retry.go:31] will retry after 786.320854ms: waiting for machine to come up
	I1028 17:26:41.509664   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:41.510180   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:41.510208   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:41.510141   32807 retry.go:31] will retry after 1.021116287s: waiting for machine to come up
	I1028 17:26:42.532494   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:42.532913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:42.532943   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:42.532860   32807 retry.go:31] will retry after 1.335656065s: waiting for machine to come up
	I1028 17:26:43.870447   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:43.870913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:43.870940   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:43.870865   32807 retry.go:31] will retry after 1.720265412s: waiting for machine to come up
	I1028 17:26:45.593694   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:45.594300   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:45.594326   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:45.594243   32807 retry.go:31] will retry after 1.629048478s: waiting for machine to come up
	I1028 17:26:47.224808   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:47.225182   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:47.225207   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:47.225159   32807 retry.go:31] will retry after 2.592881751s: waiting for machine to come up
	I1028 17:26:49.819232   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:49.819722   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:49.819742   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:49.819691   32807 retry.go:31] will retry after 2.406064511s: waiting for machine to come up
	I1028 17:26:52.227365   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:52.227723   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:52.227744   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:52.227706   32807 retry.go:31] will retry after 4.047640597s: waiting for machine to come up
	I1028 17:26:56.276662   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:56.277135   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:56.277158   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:56.277104   32807 retry.go:31] will retry after 4.243512083s: waiting for machine to come up
	I1028 17:27:00.523220   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.523671   32020 main.go:141] libmachine: (ha-381619-m03) Found IP for machine: 192.168.39.17
	I1028 17:27:00.523698   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has current primary IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.523706   32020 main.go:141] libmachine: (ha-381619-m03) Reserving static IP address...
	I1028 17:27:00.524025   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "ha-381619-m03", mac: "52:54:00:d7:8c:62", ip: "192.168.39.17"} in network mk-ha-381619
	I1028 17:27:00.592781   32020 main.go:141] libmachine: (ha-381619-m03) Reserved static IP address: 192.168.39.17
	I1028 17:27:00.592808   32020 main.go:141] libmachine: (ha-381619-m03) Waiting for SSH to be available...
	I1028 17:27:00.592817   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:00.595728   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.595996   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619
	I1028 17:27:00.596032   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:d7:8c:62
	I1028 17:27:00.596173   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:00.596195   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:00.596242   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:00.596266   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:00.596292   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:00.599869   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:27:00.599886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:27:00.599893   32020 main.go:141] libmachine: (ha-381619-m03) DBG | command : exit 0
	I1028 17:27:00.599897   32020 main.go:141] libmachine: (ha-381619-m03) DBG | err     : exit status 255
	I1028 17:27:00.599912   32020 main.go:141] libmachine: (ha-381619-m03) DBG | output  : 
	I1028 17:27:03.600719   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:03.602993   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603307   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.603342   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603475   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:03.603507   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:03.603540   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:03.603558   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:03.603573   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:03.732419   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 17:27:03.732661   32020 main.go:141] libmachine: (ha-381619-m03) KVM machine creation complete!
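	(Editor's note: the "will retry after …: waiting for machine to come up" lines above are a poll-with-backoff loop that waits for the new VM to obtain a DHCP lease before SSH provisioning starts. The Go sketch below is illustrative only; the function name, delays, and jitter are assumptions, not minikube's actual retry.go implementation.)

	// Hypothetical sketch of the retry-with-backoff pattern visible above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a growing,
	// jittered delay between attempts, and gives up after the deadline.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		base := 250 * time.Millisecond
		for attempt := 1; time.Since(start) < deadline; attempt++ {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			// Delay grows with the attempt count plus jitter, roughly matching
			// the 277ms -> 4.2s progression seen in the log above.
			delay := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// Stub lookup that "finds" the address on the third call.
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.17", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}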
	I1028 17:27:03.732966   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:03.733514   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733669   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733799   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:27:03.733816   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetState
	I1028 17:27:03.734895   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:27:03.734910   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:27:03.734928   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:27:03.734939   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.737530   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.737905   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.737933   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.738103   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.738238   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738419   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738528   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.738669   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.738865   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.738879   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:27:03.843630   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:27:03.843655   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:27:03.843666   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.846510   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.846865   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.846886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.847091   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.847261   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847412   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847510   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.847671   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.847870   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.847884   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:27:03.953430   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:27:03.953486   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:27:03.953497   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:27:03.953508   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.953779   32020 buildroot.go:166] provisioning hostname "ha-381619-m03"
	I1028 17:27:03.953819   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.954012   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.956989   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957430   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.957456   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957613   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.957773   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.957930   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.958072   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.958232   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.958456   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.958476   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m03 && echo "ha-381619-m03" | sudo tee /etc/hostname
	I1028 17:27:04.082564   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m03
	
	I1028 17:27:04.082596   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.085190   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085543   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.085567   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.085952   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086175   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.086298   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.086473   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.086494   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:27:04.201141   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:27:04.201171   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:27:04.201191   32020 buildroot.go:174] setting up certificates
	I1028 17:27:04.201204   32020 provision.go:84] configureAuth start
	I1028 17:27:04.201213   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:04.201449   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.204201   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.204661   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204749   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.206751   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.207092   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207247   32020 provision.go:143] copyHostCerts
	I1028 17:27:04.207276   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207314   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:27:04.207334   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207429   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:27:04.207519   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207543   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:27:04.207552   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207589   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:27:04.207646   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207670   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:27:04.207679   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207710   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:27:04.207772   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m03 san=[127.0.0.1 192.168.39.17 ha-381619-m03 localhost minikube]
	I1028 17:27:04.311071   32020 provision.go:177] copyRemoteCerts
	I1028 17:27:04.311121   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:27:04.311145   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.313577   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.313977   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.314019   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.314168   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.314347   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.314472   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.314623   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.403135   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:27:04.403211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:27:04.427834   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:27:04.427894   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:27:04.450833   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:27:04.450900   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:27:04.473452   32020 provision.go:87] duration metric: took 272.234677ms to configureAuth
	I1028 17:27:04.473476   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:27:04.473653   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:04.473713   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.476526   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.476861   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.476881   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.477065   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.477235   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477353   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477466   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.477631   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.477821   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.477837   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:27:04.708532   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:27:04.708562   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:27:04.708571   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetURL
	I1028 17:27:04.709704   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using libvirt version 6000000
	I1028 17:27:04.711553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.711850   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.711877   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.712051   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:27:04.712065   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:27:04.712074   32020 client.go:171] duration metric: took 27.620592933s to LocalClient.Create
	I1028 17:27:04.712101   32020 start.go:167] duration metric: took 27.620663816s to libmachine.API.Create "ha-381619"
	I1028 17:27:04.712114   32020 start.go:293] postStartSetup for "ha-381619-m03" (driver="kvm2")
	I1028 17:27:04.712128   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:27:04.712149   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.712379   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:27:04.712408   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.714536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.714835   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.714862   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.715020   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.715209   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.715341   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.715464   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.799357   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:27:04.803701   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:27:04.803723   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:27:04.803779   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:27:04.803846   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:27:04.803856   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:27:04.803932   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:27:04.813520   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:04.836571   32020 start.go:296] duration metric: took 124.443928ms for postStartSetup
	I1028 17:27:04.836615   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:04.837172   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.839735   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840084   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.840105   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840305   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:27:04.840512   32020 start.go:128] duration metric: took 27.767033157s to createHost
	I1028 17:27:04.840535   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.842741   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.843096   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843188   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.843354   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843499   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843648   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.843814   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.843957   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.843967   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:27:04.948925   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136424.929789330
	
	I1028 17:27:04.948945   32020 fix.go:216] guest clock: 1730136424.929789330
	I1028 17:27:04.948951   32020 fix.go:229] Guest: 2024-10-28 17:27:04.92978933 +0000 UTC Remote: 2024-10-28 17:27:04.840524406 +0000 UTC m=+152.171492636 (delta=89.264924ms)
	I1028 17:27:04.948966   32020 fix.go:200] guest clock delta is within tolerance: 89.264924ms
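	(Editor's note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is small. Below is a minimal, self-contained Go sketch of that comparison; the one-second tolerance is an assumed value for illustration, not taken from the log.)

	// Hypothetical sketch of the guest/host clock-skew check logged above.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output such as
	// "1730136424.929789330" into a time.Time (float64 rounding loses a few
	// nanoseconds, which is fine for a skew check).
	func parseGuestClock(s string) (time.Time, error) {
		secs, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(secs)
		nsec := int64((secs - float64(sec)) * 1e9)
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1730136424.929789330")
		if err != nil {
			panic(err)
		}
		// Host-side timestamp taken when the SSH command returned.
		host := time.Date(2024, 10, 28, 17, 27, 4, 840524406, time.UTC)
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		// Assumed tolerance of one second; minikube's actual threshold may differ.
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second)
	}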
	I1028 17:27:04.948971   32020 start.go:83] releasing machines lock for "ha-381619-m03", held for 27.875595959s
	I1028 17:27:04.948986   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.949230   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.952087   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.952552   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.952580   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.954772   32020 out.go:177] * Found network options:
	I1028 17:27:04.956124   32020 out.go:177]   - NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:04.957329   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957826   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957978   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.958075   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:27:04.958124   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.958183   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:27:04.958205   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.960811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961141   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961168   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961186   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961307   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961462   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.961599   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.961617   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961637   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961711   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.961806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961908   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.962057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.962208   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:05.194026   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:27:05.201042   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:27:05.201105   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:27:05.217646   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:27:05.217662   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:27:05.217711   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:27:05.236089   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:27:05.251712   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:27:05.251757   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:27:05.266922   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:27:05.282192   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:27:05.400766   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:27:05.540458   32020 docker.go:233] disabling docker service ...
	I1028 17:27:05.540536   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:27:05.554384   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:27:05.566632   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:27:05.704365   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:27:05.814298   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:27:05.832161   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:27:05.850391   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:27:05.850440   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.860158   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:27:05.860214   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.870182   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.880040   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.890188   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:27:05.901036   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.911295   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.928814   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.939099   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:27:05.949052   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:27:05.949107   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:27:05.961188   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:27:05.970308   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:06.082126   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:27:06.186312   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:27:06.186399   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:27:06.191449   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:27:06.191503   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:27:06.195251   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:27:06.231675   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:27:06.231743   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.263999   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.295360   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:27:06.296610   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:27:06.297916   32020 out.go:177]   - env NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:06.299066   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:06.302357   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.302805   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:06.302853   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.303125   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:27:06.307684   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:27:06.322487   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:27:06.322674   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:06.322921   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.322954   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.337329   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I1028 17:27:06.337793   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.338350   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.338369   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.338643   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.338806   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:27:06.340173   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:06.340491   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.340528   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.354028   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I1028 17:27:06.354441   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.354853   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.354871   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.355207   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.355398   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:06.355555   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.17
	I1028 17:27:06.355568   32020 certs.go:194] generating shared ca certs ...
	I1028 17:27:06.355587   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.355706   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:27:06.355743   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:27:06.355752   32020 certs.go:256] generating profile certs ...
	I1028 17:27:06.355818   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:27:06.355840   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131
	I1028 17:27:06.355854   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.17 192.168.39.254]
	I1028 17:27:06.615352   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 ...
	I1028 17:27:06.615384   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131: {Name:mk30b1e5a01615c193463ae31058813eb757a15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615571   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 ...
	I1028 17:27:06.615587   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131: {Name:mkc1142cb1e41a27aeb0597e6f743604179f8b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615684   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:27:06.615844   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:27:06.616012   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
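	(Editor's note: the certs.go/crypto.go lines above issue an apiserver certificate signed by the shared minikube CA, with the IP and DNS SANs listed in the log. The Go sketch below shows how such a SAN certificate can be produced with crypto/x509; the throwaway CA and every parameter choice here are assumptions for illustration, not minikube's actual implementation.)

	// Hypothetical sketch: sign a server certificate carrying the SANs above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// The real flow loads ca.pem / ca-key.pem from the .minikube store; a
		// throwaway CA is generated here so the example is self-contained.
		// Errors are ignored for brevity in this sketch.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-381619-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-381619-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.230"),
				net.ParseIP("192.168.39.171"), net.ParseIP("192.168.39.17"),
				net.ParseIP("192.168.39.254"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}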
	I1028 17:27:06.616031   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:27:06.616048   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:27:06.616067   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:27:06.616091   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:27:06.616107   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:27:06.616121   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:27:06.616138   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:27:06.632549   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:27:06.632628   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:27:06.632669   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:27:06.632680   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:27:06.632702   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:27:06.632732   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:27:06.632764   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:27:06.632808   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:06.632854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:27:06.632879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:06.632897   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:27:06.632955   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:06.635620   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.635992   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:06.636039   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.636203   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:06.636373   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:06.636547   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:06.636691   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:06.708743   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:27:06.714395   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:27:06.725274   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:27:06.729452   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:27:06.739682   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:27:06.743778   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:27:06.753533   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:27:06.757406   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:27:06.768515   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:27:06.772684   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:27:06.783594   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:27:06.788182   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:27:06.798917   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:27:06.824680   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:27:06.848168   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:27:06.870934   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:27:06.894622   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 17:27:06.916995   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:27:06.939854   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:27:06.962079   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:27:06.985176   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:27:07.007959   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:27:07.031196   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:27:07.054116   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:27:07.071809   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:27:07.087821   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:27:07.105114   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:27:07.121456   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:27:07.137929   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:27:07.153936   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 17:27:07.169928   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:27:07.176125   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:27:07.186611   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191749   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191791   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.197474   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:27:07.208145   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:27:07.219642   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224041   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224081   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.229665   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:27:07.240477   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:27:07.251279   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255404   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255446   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.260823   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:27:07.271234   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:27:07.275094   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:27:07.275142   32020 kubeadm.go:934] updating node {m03 192.168.39.17 8443 v1.31.2 crio true true} ...
	I1028 17:27:07.275277   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:27:07.275318   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:27:07.275356   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:27:07.290975   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:27:07.291032   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 17:27:07.291070   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.301885   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:27:07.301930   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.312754   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 17:27:07.312779   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312836   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:27:07.312864   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 17:27:07.312926   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312927   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:07.317184   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:27:07.317211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:27:07.352999   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:27:07.353042   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:27:07.353044   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.353130   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.410351   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:27:07.410406   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 17:27:08.136367   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:27:08.145689   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 17:27:08.162514   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:27:08.178802   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:27:08.195002   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:27:08.198953   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:27:08.210803   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:08.352163   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:27:08.377094   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:08.377585   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:08.377645   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:08.394262   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I1028 17:27:08.394687   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:08.395242   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:08.395276   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:08.395635   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:08.395837   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:08.396078   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:27:08.396215   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:27:08.396230   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:08.399082   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399537   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:08.399566   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399713   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:08.399904   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:08.400043   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:08.400171   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:08.552541   32020 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:08.552592   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I1028 17:27:30.870343   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (22.317699091s)
	I1028 17:27:30.870408   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:27:31.352565   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m03 minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:27:31.535264   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:27:31.653788   32020 start.go:319] duration metric: took 23.257712014s to joinCluster
	I1028 17:27:31.653906   32020 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:31.654293   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:31.655305   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:27:31.656854   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:31.931462   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:27:32.007668   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:27:32.008012   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:27:32.008099   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:27:32.008418   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m03" to be "Ready" ...
	I1028 17:27:32.008555   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.008568   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.008580   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.008590   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.012013   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:32.509493   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.509514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.509522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.509526   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.512995   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:33.008792   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.008813   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.008823   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.008831   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.013277   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:33.509021   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.509043   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.509053   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.509059   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.512568   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.009494   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.009514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.009522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.009525   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.012872   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.013477   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:34.508671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.508698   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.508711   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.508717   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.511657   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.009518   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.009538   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.009546   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.009549   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.012353   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.509512   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.509539   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.509551   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.509564   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.513144   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.009477   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.009496   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.009503   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.009508   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.012424   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:36.509250   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.509279   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.509290   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.509295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.512794   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.513405   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:37.008636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.008657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.008668   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.008676   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.011455   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:37.509093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.509123   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.509127   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.512558   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.009185   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.009214   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.009222   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.009226   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.012314   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.508924   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.508943   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.508951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.508955   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.511947   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.008656   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.008679   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.008691   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.008698   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.011261   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.011779   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:39.509251   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.509272   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.509279   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.509283   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.512371   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:40.009266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.009299   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.013354   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:40.509289   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.509307   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.509315   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.509320   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.512591   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:41.009123   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.009146   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.009163   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.014310   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:41.014943   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:41.509077   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.509126   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.509134   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.512425   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.008587   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.008609   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.008621   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.008627   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.012270   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.509586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.509607   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.509615   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.509621   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.512638   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:43.009220   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.009238   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.009248   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.009256   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.012180   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.508622   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.508646   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.508656   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.508660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.511470   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.512019   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:44.009130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.009150   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.009161   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.012525   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:44.509423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.509446   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.509457   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.509462   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.513302   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.009198   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.009218   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.009225   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.009230   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.012566   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.508621   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.508641   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.508649   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.508652   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.511562   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:45.512081   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:46.008747   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.008770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.008778   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.008782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.011847   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:46.509246   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.509269   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.509277   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.509281   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.512939   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:47.008680   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.008703   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.008713   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.008719   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.030138   32020 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1028 17:27:47.508630   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.508650   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.508657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.508663   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.514479   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:47.515054   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:48.008911   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.008931   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.008940   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.008944   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.012001   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:48.509098   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.509121   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.509132   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.509138   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.512351   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.008615   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.008635   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.008643   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.008647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.011780   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.508700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.508723   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.508731   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.508735   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.511993   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.008627   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.008648   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.008657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.008660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.012285   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.012911   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:50.509280   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.509301   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.509309   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.509321   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.512855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.009269   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.009303   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.012097   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.509273   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.509293   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.509304   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.509309   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.512305   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.513072   32020 node_ready.go:49] node "ha-381619-m03" has status "Ready":"True"
	I1028 17:27:51.513099   32020 node_ready.go:38] duration metric: took 19.504662706s for node "ha-381619-m03" to be "Ready" ...
	I1028 17:27:51.513110   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:51.513182   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:51.513193   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.513203   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.513209   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.518727   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:51.525983   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.526072   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:27:51.526088   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.526100   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.526111   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.531963   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:51.532739   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.532753   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.532761   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.532764   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.535083   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.535631   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.535649   32020 pod_ready.go:82] duration metric: took 9.646144ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535657   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:27:51.535707   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.535714   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.535721   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.538224   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.538964   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.538979   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.538986   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.538990   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.541964   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.542349   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.542364   32020 pod_ready.go:82] duration metric: took 6.701109ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542375   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542424   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:27:51.542434   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.542441   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.542447   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.544839   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.545361   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.545376   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.545385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.545392   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.547384   32020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 17:27:51.547876   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.547890   32020 pod_ready.go:82] duration metric: took 5.50604ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547898   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:27:51.547944   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.547951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.547954   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.549977   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.550423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:51.550435   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.550442   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.550445   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.552459   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.553082   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.553099   32020 pod_ready.go:82] duration metric: took 5.194272ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.553110   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.709397   32020 request.go:632] Waited for 156.217787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709446   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709451   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.709458   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.709461   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.712548   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.909629   32020 request.go:632] Waited for 196.367534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909689   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.909700   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.909708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.918132   32020 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 17:27:51.918809   32020 pod_ready.go:93] pod "etcd-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.918828   32020 pod_ready.go:82] duration metric: took 365.711465ms for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.918850   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.109303   32020 request.go:632] Waited for 190.370368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109365   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109373   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.109383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.109388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.112392   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.309408   32020 request.go:632] Waited for 196.27481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309460   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309464   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.309471   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.309475   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.312195   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.312752   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.312777   32020 pod_ready.go:82] duration metric: took 393.917667ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.312791   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.509760   32020 request.go:632] Waited for 196.900981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509849   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509861   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.509872   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.509878   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.513709   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.709720   32020 request.go:632] Waited for 195.19818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709771   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709777   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.709784   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.709789   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.712910   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.713496   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.713513   32020 pod_ready.go:82] duration metric: took 400.71419ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.713525   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.910080   32020 request.go:632] Waited for 196.490754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910131   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910138   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.910148   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.910155   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.913570   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.109611   32020 request.go:632] Waited for 195.067242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109675   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109680   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.109688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.109692   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.112419   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:53.113243   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.113258   32020 pod_ready.go:82] duration metric: took 399.726328ms for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.113269   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.309322   32020 request.go:632] Waited for 195.985489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309373   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309378   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.309385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.309389   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.312514   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.509641   32020 request.go:632] Waited for 196.355986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509756   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.509788   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.509809   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.513067   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.513631   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.513648   32020 pod_ready.go:82] duration metric: took 400.372385ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.513660   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.709756   32020 request.go:632] Waited for 196.030975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709821   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709829   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.709838   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.709847   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.713250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.910289   32020 request.go:632] Waited for 196.241506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910347   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910352   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.910360   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.910365   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.913501   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.914111   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.914128   32020 pod_ready.go:82] duration metric: took 400.460847ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.914138   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.110262   32020 request.go:632] Waited for 196.057341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110321   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110328   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.110338   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.110344   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.113686   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.309625   32020 request.go:632] Waited for 195.198525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309704   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.309715   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.309724   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.312970   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.313530   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.313550   32020 pod_ready.go:82] duration metric: took 399.405564ms for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.313561   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.509582   32020 request.go:632] Waited for 195.958227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509651   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.509664   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.509669   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.513356   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.709469   32020 request.go:632] Waited for 195.28008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709541   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709547   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.709555   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.709562   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.712778   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.713684   32020 pod_ready.go:93] pod "kube-proxy-2z74r" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.713706   32020 pod_ready.go:82] duration metric: took 400.138051ms for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.713722   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.909768   32020 request.go:632] Waited for 195.979649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909859   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909871   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.909882   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.909893   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.912982   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.110064   32020 request.go:632] Waited for 196.359608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110135   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.110142   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.110148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.113297   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.113778   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.113796   32020 pod_ready.go:82] duration metric: took 400.063804ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.113805   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.309960   32020 request.go:632] Waited for 196.087241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310011   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310017   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.310027   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.310040   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.313630   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.509848   32020 request.go:632] Waited for 195.356609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509902   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509907   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.509917   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.509922   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.513283   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.513872   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.513891   32020 pod_ready.go:82] duration metric: took 400.079859ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.513903   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.709489   32020 request.go:632] Waited for 195.521691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709543   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709558   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.709582   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.709589   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.713346   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.910316   32020 request.go:632] Waited for 196.337736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910371   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910375   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.910383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.910388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.913484   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.914099   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.914115   32020 pod_ready.go:82] duration metric: took 400.201992ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.914124   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.110258   32020 request.go:632] Waited for 196.039546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110326   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110331   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.110337   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.110342   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.113332   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:56.310263   32020 request.go:632] Waited for 196.319737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310334   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310355   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.310370   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.310379   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.313786   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.314505   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.314532   32020 pod_ready.go:82] duration metric: took 400.399291ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.314546   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.510327   32020 request.go:632] Waited for 195.699418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510378   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510383   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.510390   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.510394   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.513464   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.709328   32020 request.go:632] Waited for 195.274185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709385   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709391   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.709398   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.709403   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.712740   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.713420   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.713436   32020 pod_ready.go:82] duration metric: took 398.882403ms for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.713446   32020 pod_ready.go:39] duration metric: took 5.200325366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:56.713469   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:27:56.713519   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:27:56.729002   32020 api_server.go:72] duration metric: took 25.075050157s to wait for apiserver process to appear ...
	I1028 17:27:56.729025   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:27:56.729051   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:27:56.734141   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 17:27:56.734212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:27:56.734223   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.734234   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.734242   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.735154   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:27:56.735212   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:27:56.735228   32020 api_server.go:131] duration metric: took 6.196303ms to wait for apiserver health ...
	I1028 17:27:56.735237   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:27:56.909657   32020 request.go:632] Waited for 174.332812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909707   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909712   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.909720   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.909725   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.915545   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:56.922175   32020 system_pods.go:59] 24 kube-system pods found
	I1028 17:27:56.922215   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:56.922225   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:56.922230   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:56.922235   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:56.922240   32020 system_pods.go:61] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:56.922248   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:56.922253   32020 system_pods.go:61] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:56.922259   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:56.922267   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:56.922273   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:56.922281   32020 system_pods.go:61] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:56.922288   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:56.922294   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:56.922302   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:56.922308   32020 system_pods.go:61] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:56.922317   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:56.922327   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:56.922335   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:56.922341   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:56.922348   32020 system_pods.go:61] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:56.922352   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:56.922355   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:56.922361   32020 system_pods.go:61] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:56.922364   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:56.922369   32020 system_pods.go:74] duration metric: took 187.124012ms to wait for pod list to return data ...
	I1028 17:27:56.922378   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:27:57.109949   32020 request.go:632] Waited for 187.506133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110004   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110012   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.110022   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.110033   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.113502   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:57.113628   32020 default_sa.go:45] found service account: "default"
	I1028 17:27:57.113645   32020 default_sa.go:55] duration metric: took 191.260682ms for default service account to be created ...
	I1028 17:27:57.113656   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:27:57.309925   32020 request.go:632] Waited for 196.205305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310024   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310036   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.310047   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.310053   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.315888   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:57.322856   32020 system_pods.go:86] 24 kube-system pods found
	I1028 17:27:57.322880   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:57.322886   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:57.322890   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:57.322893   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:57.322897   32020 system_pods.go:89] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:57.322900   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:57.322904   32020 system_pods.go:89] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:57.322907   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:57.322918   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:57.322927   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:57.322932   32020 system_pods.go:89] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:57.322940   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:57.322946   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:57.322951   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:57.322958   32020 system_pods.go:89] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:57.322966   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:57.322971   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:57.322978   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:57.322986   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:57.322991   32020 system_pods.go:89] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:57.322999   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:57.323006   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:57.323011   32020 system_pods.go:89] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:57.323018   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:57.323027   32020 system_pods.go:126] duration metric: took 209.364489ms to wait for k8s-apps to be running ...
	I1028 17:27:57.323045   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:27:57.323123   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:57.338248   32020 system_svc.go:56] duration metric: took 15.198158ms WaitForService to wait for kubelet
	I1028 17:27:57.338268   32020 kubeadm.go:582] duration metric: took 25.684324158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:27:57.338294   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:27:57.509596   32020 request.go:632] Waited for 171.215252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509662   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509677   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.509688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.509699   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.514522   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:57.515701   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515733   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515769   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515779   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515785   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515800   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515810   32020 node_conditions.go:105] duration metric: took 177.507704ms to run NodePressure ...
	I1028 17:27:57.515829   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:27:57.515863   32020 start.go:255] writing updated cluster config ...
	I1028 17:27:57.516171   32020 ssh_runner.go:195] Run: rm -f paused
	I1028 17:27:57.567306   32020 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:27:57.569290   32020 out.go:177] * Done! kubectl is now configured to use "ha-381619" cluster and "default" namespace by default
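For reference, the repeated "Waited for ... due to client-side throttling" entries and the pod_ready checks in the trace above come from client-go's default request rate limiter and from minikube polling each control-plane pod for the Ready condition. The following is a minimal illustrative sketch, not minikube's actual implementation: it polls one of the pods named in the log (etcd-ha-381619-m03 in kube-system) for the Ready condition and shows where QPS/Burst would be raised to avoid the throttling waits. The kubeconfig path, rate-limit values, and poll interval are assumptions chosen for the example.

// readiness_poll.go: illustrative sketch only; not minikube's own code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (this path is hypothetical).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst reduces the "Waited for ... due to client-side throttling"
	// delays that client-go logs when requests exceed its default rate limit.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll the pod until its Ready condition is True, the same check the
	// pod_ready.go lines above report for etcd-ha-381619-m03.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-381619-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}
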
	
	
	==> CRI-O <==
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.447488645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136716447468940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b1de87a-4be2-426a-b6b8-26d132dd0eac name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.448112444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcd56546-0e18-40bb-a0b9-a02a24f6460a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.448169464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcd56546-0e18-40bb-a0b9-a02a24f6460a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.448390363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcd56546-0e18-40bb-a0b9-a02a24f6460a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.493118760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d9c42a5-1b7e-42fd-9897-210e34cb98a2 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.493217020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d9c42a5-1b7e-42fd-9897-210e34cb98a2 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.494052876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=373feaab-7451-4c02-a99f-43543c818619 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.494439100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136716494417722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=373feaab-7451-4c02-a99f-43543c818619 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.494982117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8488b5b9-cd2d-4c29-a36b-b109c9e5ae61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.495053498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8488b5b9-cd2d-4c29-a36b-b109c9e5ae61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.495249084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8488b5b9-cd2d-4c29-a36b-b109c9e5ae61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.533691508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=741c72db-4f21-407b-b97d-756079f57eba name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.533781396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=741c72db-4f21-407b-b97d-756079f57eba name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.535010273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd18a63a-c5e6-4544-8e75-d96a94d692e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.535404576Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136716535373792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd18a63a-c5e6-4544-8e75-d96a94d692e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.536003865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5910c520-5828-418e-a249-f27dd866447f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.536071371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5910c520-5828-418e-a249-f27dd866447f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.536267642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5910c520-5828-418e-a249-f27dd866447f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.572779884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b44d058-f9d5-40fb-a772-5ee1b47fa510 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.572849385Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b44d058-f9d5-40fb-a772-5ee1b47fa510 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.574036151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70be9f3f-3780-4ecf-bb65-8751e8b356a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.574579407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136716574555238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70be9f3f-3780-4ecf-bb65-8751e8b356a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.575437180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fe2d7c4-7367-4b9b-b03b-6b92ddca8d35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.575548086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fe2d7c4-7367-4b9b-b03b-6b92ddca8d35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:31:56 ha-381619 crio[660]: time="2024-10-28 17:31:56.576255670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fe2d7c4-7367-4b9b-b03b-6b92ddca8d35 name=/runtime.v1.RuntimeService/ListContainers
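	Note: the crio entries above are debug-level runtime logs showing what looks like periodic ListContainers/Version/ImageFsInfo polling against the CRI. As a rough sketch, a similar excerpt can usually be pulled straight from the node's crio journal; the profile name ha-381619 is taken from this run, while the systemd unit name and flags are assumptions:
	
	  minikube ssh -p ha-381619 "sudo journalctl -u crio --no-pager | tail -n 200"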
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb3c00b93a7e6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   32dd7ef5c8db8       coredns-7c65d6cfc9-mtmvl
	439a12fd4f2e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   a8d9ef07a9de9       coredns-7c65d6cfc9-6lp7c
	32b25385ac6d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    6 minutes ago       Running             storage-provisioner       0                   cdf8a7008daaa       storage-provisioner
	02eaa5b848022       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                    6 minutes ago       Running             kindnet-cni               0                   ec93f4cb498de       kindnet-vj9vj
	4c2af4b0e8f70       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                    6 minutes ago       Running             kube-proxy                0                   31e8db8e13561       kube-proxy-mqdtj
	8820dc5a1a258       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215   6 minutes ago       Running             kube-vip                  0                   0440b64671662       kube-vip-ha-381619
	a2a4ad9e37b9c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                    6 minutes ago       Running             kube-apiserver            0                   8535275eaad56       kube-apiserver-ha-381619
	c4311ab52a438       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                    6 minutes ago       Running             kube-controller-manager   0                   75b5ea16f2e6b       kube-controller-manager-ha-381619
	5d299a6ffacac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                    6 minutes ago       Running             etcd                      0                   2d476f176dee3       etcd-ha-381619
	8f6c077dbde89       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                    6 minutes ago       Running             kube-scheduler            0                   2c5f11da0112e       kube-scheduler-ha-381619
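	Note: a container listing equivalent to the table above can typically be reproduced on the control-plane node with crictl; the profile name comes from this run, and the exact invocation is an assumption:
	
	  minikube ssh -p ha-381619 "sudo crictl ps -a"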
	
	
	==> coredns [439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f] <==
	[INFO] 10.244.2.2:53226 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001368106s
	[INFO] 10.244.2.2:36312 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118066s
	[INFO] 10.244.1.2:38518 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000323292s
	[INFO] 10.244.1.2:47890 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000118239s
	[INFO] 10.244.1.2:45070 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000130482s
	[INFO] 10.244.1.2:39687 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001925125s
	[INFO] 10.244.2.3:53812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151587s
	[INFO] 10.244.2.3:54592 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180193s
	[INFO] 10.244.2.3:46470 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138925s
	[INFO] 10.244.2.2:48981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776352s
	[INFO] 10.244.2.2:35249 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131241s
	[INFO] 10.244.2.2:53917 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177037s
	[INFO] 10.244.2.2:34049 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001120542s
	[INFO] 10.244.1.2:35278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111663s
	[INFO] 10.244.1.2:37962 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106563s
	[INFO] 10.244.1.2:40545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246646s
	[INFO] 10.244.1.2:40814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215904s
	[INFO] 10.244.2.3:49806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000229773s
	[INFO] 10.244.2.2:44763 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117588s
	[INFO] 10.244.2.3:48756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125652s
	[INFO] 10.244.2.3:41328 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177165s
	[INFO] 10.244.2.3:35650 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137462s
	[INFO] 10.244.2.2:60478 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163829s
	[INFO] 10.244.2.2:51252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106643s
	[INFO] 10.244.1.2:56942 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137828s
	
	
	==> coredns [fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30] <==
	[INFO] 10.244.2.3:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131477s
	[INFO] 10.244.2.2:46692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196624s
	[INFO] 10.244.2.2:38402 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226272s
	[INFO] 10.244.2.2:34845 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153045s
	[INFO] 10.244.2.2:49870 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121016s
	[INFO] 10.244.1.2:51535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001893779s
	[INFO] 10.244.1.2:36412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109955s
	[INFO] 10.244.1.2:53434 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000734s
	[INFO] 10.244.1.2:38007 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101464s
	[INFO] 10.244.2.3:39546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.2.3:49299 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158392s
	[INFO] 10.244.2.3:42607 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102312s
	[INFO] 10.244.2.2:36855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150344s
	[INFO] 10.244.2.2:46374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00016867s
	[INFO] 10.244.2.2:37275 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112218s
	[INFO] 10.244.1.2:41523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017259s
	[INFO] 10.244.1.2:43696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000347465s
	[INFO] 10.244.1.2:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161099s
	[INFO] 10.244.1.2:59192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118287s
	[INFO] 10.244.2.3:42470 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243243s
	[INFO] 10.244.2.2:35932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020307s
	[INFO] 10.244.2.2:39597 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184178s
	[INFO] 10.244.1.2:43973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139891s
	[INFO] 10.244.1.2:41644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171411s
	[INFO] 10.244.1.2:47984 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086921s
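	Note: the two CoreDNS blocks above show ordinary query traffic (NOERROR/NXDOMAIN answers) from both replicas. If needed, the same logs can usually be fetched with kubectl; assuming the kubeconfig context matches the minikube profile name:
	
	  kubectl --context ha-381619 -n kube-system logs coredns-7c65d6cfc9-mtmvl
	  kubectl --context ha-381619 -n kube-system logs coredns-7c65d6cfc9-6lp7c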
	
	
	==> describe nodes <==
	Name:               ha-381619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:25:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-381619
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ff487634ba146ebb8929cc99763c422
	  System UUID:                1ff48763-4ba1-46eb-b892-9cc99763c422
	  Boot ID:                    ce5a7712-d088-475f-80ec-c8b7dee605bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6lp7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m38s
	  kube-system                 coredns-7c65d6cfc9-mtmvl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m38s
	  kube-system                 etcd-ha-381619                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m43s
	  kube-system                 kindnet-vj9vj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m38s
	  kube-system                 kube-apiserver-ha-381619             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-ha-381619    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-proxy-mqdtj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-scheduler-ha-381619             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-vip-ha-381619                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m38s                  kube-proxy       
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m49s (x7 over 6m49s)  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m49s (x8 over 6m49s)  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m49s (x8 over 6m49s)  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m42s                  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s                  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s                  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m39s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  NodeReady                6m26s                  kubelet          Node ha-381619 status is now: NodeReady
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	
	
	Name:               ha-381619-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:26:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:29:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-381619-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe038bc140e34a24bfa4fe915bd6a83f
	  System UUID:                fe038bc1-40e3-4a24-bfa4-fe915bd6a83f
	  Boot ID:                    2395418c-cd94-4285-8c38-7cd31a1df92a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dxwnw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-381619-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m46s
	  kube-system                 kindnet-2ggdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m46s
	  kube-system                 kube-apiserver-ha-381619-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-controller-manager-ha-381619-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-proxy-nrfgq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-scheduler-ha-381619-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-vip-ha-381619-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m43s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m47s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m47s)  kubelet          Node ha-381619-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m47s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m44s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeReady                5m24s                  kubelet          Node ha-381619-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-381619-m02 status is now: NodeNotReady
	
	
	Name:               ha-381619-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:27:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-381619-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f056208103704b70bfb827d2e01fcbd6
	  System UUID:                f0562081-0370-4b70-bfb8-27d2e01fcbd6
	  Boot ID:                    3c41c87b-23bb-455f-8665-1ca87b736f8b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-26cg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  default                     busybox-7dff88458-9n6bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-381619-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m26s
	  kube-system                 kindnet-82dqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m28s
	  kube-system                 kube-apiserver-ha-381619-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-ha-381619-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-2z74r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-ha-381619-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-vip-ha-381619-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m24s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet          Node ha-381619-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x7 over 4m28s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	
	
	Name:               ha-381619-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_28_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:28:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-381619-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c794eda5b61f4b51846d119496d6611f
	  System UUID:                c794eda5-b61f-4b51-846d-119496d6611f
	  Boot ID:                    d054e196-c392-4e7e-a1b3-e459ee7974d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzqx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-7dwhb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m15s (x2 over 3m15s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x2 over 3m15s)  kubelet          Node ha-381619-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x2 over 3m15s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  NodeReady                2m52s                  kubelet          Node ha-381619-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 17:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050172] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854623] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.491096] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.570925] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.341236] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.065909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059908] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.181734] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.112783] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.252616] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct28 17:25] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.759910] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.058388] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.418126] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.806365] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +4.131777] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.537990] kauditd_printk_skb: 41 callbacks suppressed
	[  +9.942403] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9] <==
	{"level":"warn","ts":"2024-10-28T17:31:56.619248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.676382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.838501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.844542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.851660Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.854575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.857133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.863114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.868665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.874742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.875333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.881366Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.884585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.892415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.898444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.904202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.908378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.911154Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.914405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.917316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.919094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.921973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.929394Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.936077Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:31:56.976050Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:31:57 up 7 min,  0 users,  load average: 0.06, 0.20, 0.12
	Linux ha-381619 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3] <==
	I1028 17:31:20.292249       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:30.295378       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:30.295542       1 main.go:300] handling current node
	I1028 17:31:30.295590       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:30.295611       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:30.296072       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:30.296113       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:30.296285       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:30.296308       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:40.295696       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:40.295776       1 main.go:300] handling current node
	I1028 17:31:40.295795       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:40.295804       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:40.296160       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:40.296192       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:40.296331       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:40.296358       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:50.300065       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:50.300101       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:50.300348       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:50.300359       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:50.300489       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:50.300496       1 main.go:300] handling current node
	I1028 17:31:50.300514       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:50.300518       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37] <==
	W1028 17:25:12.245785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I1028 17:25:12.247133       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 17:25:12.256065       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 17:25:12.326331       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 17:25:13.936309       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 17:25:13.952773       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 17:25:13.968009       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 17:25:17.830466       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1028 17:25:18.077531       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1028 17:28:07.019815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41404: use of closed network connection
	E1028 17:28:07.205390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41420: use of closed network connection
	E1028 17:28:07.386536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41448: use of closed network connection
	E1028 17:28:07.599536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41470: use of closed network connection
	E1028 17:28:07.775264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41490: use of closed network connection
	E1028 17:28:07.949242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41512: use of closed network connection
	E1028 17:28:08.118133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41522: use of closed network connection
	E1028 17:28:08.303400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41550: use of closed network connection
	E1028 17:28:08.475723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41556: use of closed network connection
	E1028 17:28:08.762057       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47594: use of closed network connection
	E1028 17:28:08.944378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47612: use of closed network connection
	E1028 17:28:09.126803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47636: use of closed network connection
	E1028 17:28:09.297149       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47658: use of closed network connection
	E1028 17:28:09.471140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47674: use of closed network connection
	E1028 17:28:09.647026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47704: use of closed network connection
	W1028 17:29:32.257515       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.230]
	
	
	==> kube-controller-manager [c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8] <==
	I1028 17:28:42.026011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.036622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.060198       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.297173       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-381619-m04"
	I1028 17:28:42.386481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.396569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.781672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.951532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.966339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:46.926084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:47.034432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:52.333791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:29:04.463505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:06.946376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:12.658007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:30:06.972035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:06.972340       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:30:06.993167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:07.005350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.940759ms"
	I1028 17:30:07.006727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.8µs"
	I1028 17:30:07.346197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:12.214622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:31.329575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619"
	
	
	==> kube-proxy [4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 17:25:18.698349       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 17:25:18.711046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E1028 17:25:18.711157       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:25:18.745433       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 17:25:18.745462       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 17:25:18.745490       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:25:18.747834       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:25:18.748160       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:25:18.748312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:25:18.749989       1 config.go:199] "Starting service config controller"
	I1028 17:25:18.750071       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:25:18.750117       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:25:18.750134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:25:18.750598       1 config.go:328] "Starting node config controller"
	I1028 17:25:18.751738       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:25:18.851210       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:25:18.851309       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:25:18.852898       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b] <==
	E1028 17:25:11.721217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.842707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:25:11.842776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.845287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:25:11.848083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.886433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:25:11.886602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1028 17:25:14.002937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:27:58.460072       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="568dfe45-5437-4cfd-8d20-2fa1e33d8999" pod="default/busybox-7dff88458-9n6bb" assumedNode="ha-381619-m03" currentNode="ha-381619-m02"
	E1028 17:27:58.471238       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m02"
	E1028 17:27:58.471407       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 568dfe45-5437-4cfd-8d20-2fa1e33d8999(default/busybox-7dff88458-9n6bb) was assumed on ha-381619-m02 but assigned to ha-381619-m03" pod="default/busybox-7dff88458-9n6bb"
	E1028 17:27:58.471445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" pod="default/busybox-7dff88458-9n6bb"
	I1028 17:27:58.471522       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m03"
	E1028 17:28:42.093317       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.093832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9291bc3b-2fa3-4a6c-99d3-7bb2a5721b25(kube-system/kindnet-fzqx2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fzqx2"
	E1028 17:28:42.094010       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-fzqx2"
	I1028 17:28:42.094225       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.149948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.154547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15a36ca9-85be-4b6a-8e4a-31495d13a0c1(kube-system/kube-proxy-7dwhb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7dwhb"
	E1028 17:28:42.156945       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" pod="kube-system/kube-proxy-7dwhb"
	I1028 17:28:42.157115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.164640       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	E1028 17:28:42.164715       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 61afb85d-818e-40a2-ad14-87c5f4541d0e(kube-system/kindnet-p6x26) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p6x26"
	E1028 17:28:42.164729       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-p6x26"
	I1028 17:28:42.164745       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	
	
	==> kubelet <==
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979164    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979443    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.980958    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.982957    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988254    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988294    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989574    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989617    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996610    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996710    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.872137    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997852    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997963    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:23.999904    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:24.000328    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001784    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001829    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003002    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003044    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:54 ha-381619 kubelet[1301]: E1028 17:31:54.004348    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136714004119051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:54 ha-381619 kubelet[1301]: E1028 17:31:54.004369    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136714004119051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-381619 -n ha-381619
helpers_test.go:261: (dbg) Run:  kubectl --context ha-381619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.092700682s)
ha_test.go:309: expected profile "ha-381619" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-381619\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-381619\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-381619\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.230\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.171\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.17\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.224\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-381619 -n ha-381619
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 logs -n 25: (1.369755429s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m03_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m04 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp testdata/cp-test.txt                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m04_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03:/home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m03 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-381619 node stop m02 -v=7                                                    | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-381619 node start m02 -v=7                                                   | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:31 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:24:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:24:32.704402   32020 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:24:32.704551   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704563   32020 out.go:358] Setting ErrFile to fd 2...
	I1028 17:24:32.704569   32020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:32.704718   32020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:24:32.705246   32020 out.go:352] Setting JSON to false
	I1028 17:24:32.706049   32020 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4016,"bootTime":1730132257,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:24:32.706140   32020 start.go:139] virtualization: kvm guest
	I1028 17:24:32.708076   32020 out.go:177] * [ha-381619] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:24:32.709709   32020 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:24:32.709708   32020 notify.go:220] Checking for updates...
	I1028 17:24:32.711979   32020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:24:32.713179   32020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:24:32.714308   32020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.715427   32020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:24:32.716562   32020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:24:32.717898   32020 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:24:32.750233   32020 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 17:24:32.751376   32020 start.go:297] selected driver: kvm2
	I1028 17:24:32.751386   32020 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:24:32.751396   32020 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:24:32.752108   32020 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.752174   32020 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:24:32.765779   32020 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:24:32.765818   32020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:24:32.766066   32020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:24:32.766095   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:24:32.766149   32020 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 17:24:32.766159   32020 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 17:24:32.766215   32020 start.go:340] cluster config:
	{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:24:32.766343   32020 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:24:32.768753   32020 out.go:177] * Starting "ha-381619" primary control-plane node in "ha-381619" cluster
	I1028 17:24:32.769947   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:32.769974   32020 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:24:32.769982   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:24:32.770049   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:24:32.770062   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:24:32.770342   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:32.770362   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json: {Name:mkd5c3a5f97562236390379745e09449a8badb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:24:32.770497   32020 start.go:360] acquireMachinesLock for ha-381619: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:24:32.770539   32020 start.go:364] duration metric: took 26.277µs to acquireMachinesLock for "ha-381619"
	I1028 17:24:32.770561   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:24:32.770606   32020 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 17:24:32.772872   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:24:32.772986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:32.773028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:32.786246   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I1028 17:24:32.786651   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:32.787204   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:24:32.787223   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:32.787585   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:32.787761   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:32.787890   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:32.788041   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:24:32.788072   32020 client.go:168] LocalClient.Create starting
	I1028 17:24:32.788105   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:24:32.788134   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788152   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788202   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:24:32.788220   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:24:32.788232   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:24:32.788246   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:24:32.788258   32020 main.go:141] libmachine: (ha-381619) Calling .PreCreateCheck
	I1028 17:24:32.788587   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:32.789017   32020 main.go:141] libmachine: Creating machine...
	I1028 17:24:32.789034   32020 main.go:141] libmachine: (ha-381619) Calling .Create
	I1028 17:24:32.789161   32020 main.go:141] libmachine: (ha-381619) Creating KVM machine...
	I1028 17:24:32.790254   32020 main.go:141] libmachine: (ha-381619) DBG | found existing default KVM network
	I1028 17:24:32.790889   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.790760   32043 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1028 17:24:32.790924   32020 main.go:141] libmachine: (ha-381619) DBG | created network xml: 
	I1028 17:24:32.790942   32020 main.go:141] libmachine: (ha-381619) DBG | <network>
	I1028 17:24:32.790953   32020 main.go:141] libmachine: (ha-381619) DBG |   <name>mk-ha-381619</name>
	I1028 17:24:32.790960   32020 main.go:141] libmachine: (ha-381619) DBG |   <dns enable='no'/>
	I1028 17:24:32.790971   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.790981   32020 main.go:141] libmachine: (ha-381619) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 17:24:32.791022   32020 main.go:141] libmachine: (ha-381619) DBG |     <dhcp>
	I1028 17:24:32.791042   32020 main.go:141] libmachine: (ha-381619) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 17:24:32.791053   32020 main.go:141] libmachine: (ha-381619) DBG |     </dhcp>
	I1028 17:24:32.791062   32020 main.go:141] libmachine: (ha-381619) DBG |   </ip>
	I1028 17:24:32.791070   32020 main.go:141] libmachine: (ha-381619) DBG |   
	I1028 17:24:32.791079   32020 main.go:141] libmachine: (ha-381619) DBG | </network>
	I1028 17:24:32.791092   32020 main.go:141] libmachine: (ha-381619) DBG | 
	I1028 17:24:32.795776   32020 main.go:141] libmachine: (ha-381619) DBG | trying to create private KVM network mk-ha-381619 192.168.39.0/24...
	I1028 17:24:32.856590   32020 main.go:141] libmachine: (ha-381619) DBG | private KVM network mk-ha-381619 192.168.39.0/24 created
	I1028 17:24:32.856623   32020 main.go:141] libmachine: (ha-381619) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:32.856641   32020 main.go:141] libmachine: (ha-381619) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:24:32.856686   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:32.856608   32043 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:32.856733   32020 main.go:141] libmachine: (ha-381619) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:24:33.109141   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.109021   32043 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa...
	I1028 17:24:33.382423   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382288   32043 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk...
	I1028 17:24:33.382457   32020 main.go:141] libmachine: (ha-381619) DBG | Writing magic tar header
	I1028 17:24:33.382473   32020 main.go:141] libmachine: (ha-381619) DBG | Writing SSH key tar header
	I1028 17:24:33.382487   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:33.382434   32043 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 ...
	I1028 17:24:33.382577   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619 (perms=drwx------)
	I1028 17:24:33.382600   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:24:33.382611   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619
	I1028 17:24:33.382624   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:24:33.382636   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:33.382651   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:24:33.382662   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:24:33.382673   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:24:33.382683   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:24:33.382696   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:24:33.382710   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:24:33.382720   32020 main.go:141] libmachine: (ha-381619) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:24:33.382733   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:33.382743   32020 main.go:141] libmachine: (ha-381619) DBG | Checking permissions on dir: /home
	I1028 17:24:33.382755   32020 main.go:141] libmachine: (ha-381619) DBG | Skipping /home - not owner
	I1028 17:24:33.383729   32020 main.go:141] libmachine: (ha-381619) define libvirt domain using xml: 
	I1028 17:24:33.383753   32020 main.go:141] libmachine: (ha-381619) <domain type='kvm'>
	I1028 17:24:33.383763   32020 main.go:141] libmachine: (ha-381619)   <name>ha-381619</name>
	I1028 17:24:33.383771   32020 main.go:141] libmachine: (ha-381619)   <memory unit='MiB'>2200</memory>
	I1028 17:24:33.383782   32020 main.go:141] libmachine: (ha-381619)   <vcpu>2</vcpu>
	I1028 17:24:33.383791   32020 main.go:141] libmachine: (ha-381619)   <features>
	I1028 17:24:33.383800   32020 main.go:141] libmachine: (ha-381619)     <acpi/>
	I1028 17:24:33.383823   32020 main.go:141] libmachine: (ha-381619)     <apic/>
	I1028 17:24:33.383834   32020 main.go:141] libmachine: (ha-381619)     <pae/>
	I1028 17:24:33.383847   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.383857   32020 main.go:141] libmachine: (ha-381619)   </features>
	I1028 17:24:33.383868   32020 main.go:141] libmachine: (ha-381619)   <cpu mode='host-passthrough'>
	I1028 17:24:33.383876   32020 main.go:141] libmachine: (ha-381619)   
	I1028 17:24:33.383886   32020 main.go:141] libmachine: (ha-381619)   </cpu>
	I1028 17:24:33.383894   32020 main.go:141] libmachine: (ha-381619)   <os>
	I1028 17:24:33.383901   32020 main.go:141] libmachine: (ha-381619)     <type>hvm</type>
	I1028 17:24:33.383912   32020 main.go:141] libmachine: (ha-381619)     <boot dev='cdrom'/>
	I1028 17:24:33.383921   32020 main.go:141] libmachine: (ha-381619)     <boot dev='hd'/>
	I1028 17:24:33.383934   32020 main.go:141] libmachine: (ha-381619)     <bootmenu enable='no'/>
	I1028 17:24:33.383944   32020 main.go:141] libmachine: (ha-381619)   </os>
	I1028 17:24:33.383952   32020 main.go:141] libmachine: (ha-381619)   <devices>
	I1028 17:24:33.383961   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='cdrom'>
	I1028 17:24:33.383974   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/boot2docker.iso'/>
	I1028 17:24:33.383984   32020 main.go:141] libmachine: (ha-381619)       <target dev='hdc' bus='scsi'/>
	I1028 17:24:33.383994   32020 main.go:141] libmachine: (ha-381619)       <readonly/>
	I1028 17:24:33.384049   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384071   32020 main.go:141] libmachine: (ha-381619)     <disk type='file' device='disk'>
	I1028 17:24:33.384079   32020 main.go:141] libmachine: (ha-381619)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:24:33.384087   32020 main.go:141] libmachine: (ha-381619)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/ha-381619.rawdisk'/>
	I1028 17:24:33.384092   32020 main.go:141] libmachine: (ha-381619)       <target dev='hda' bus='virtio'/>
	I1028 17:24:33.384099   32020 main.go:141] libmachine: (ha-381619)     </disk>
	I1028 17:24:33.384104   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384111   32020 main.go:141] libmachine: (ha-381619)       <source network='mk-ha-381619'/>
	I1028 17:24:33.384116   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384122   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384127   32020 main.go:141] libmachine: (ha-381619)     <interface type='network'>
	I1028 17:24:33.384134   32020 main.go:141] libmachine: (ha-381619)       <source network='default'/>
	I1028 17:24:33.384140   32020 main.go:141] libmachine: (ha-381619)       <model type='virtio'/>
	I1028 17:24:33.384146   32020 main.go:141] libmachine: (ha-381619)     </interface>
	I1028 17:24:33.384151   32020 main.go:141] libmachine: (ha-381619)     <serial type='pty'>
	I1028 17:24:33.384157   32020 main.go:141] libmachine: (ha-381619)       <target port='0'/>
	I1028 17:24:33.384180   32020 main.go:141] libmachine: (ha-381619)     </serial>
	I1028 17:24:33.384203   32020 main.go:141] libmachine: (ha-381619)     <console type='pty'>
	I1028 17:24:33.384217   32020 main.go:141] libmachine: (ha-381619)       <target type='serial' port='0'/>
	I1028 17:24:33.384235   32020 main.go:141] libmachine: (ha-381619)     </console>
	I1028 17:24:33.384247   32020 main.go:141] libmachine: (ha-381619)     <rng model='virtio'>
	I1028 17:24:33.384258   32020 main.go:141] libmachine: (ha-381619)       <backend model='random'>/dev/random</backend>
	I1028 17:24:33.384267   32020 main.go:141] libmachine: (ha-381619)     </rng>
	I1028 17:24:33.384291   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384303   32020 main.go:141] libmachine: (ha-381619)     
	I1028 17:24:33.384320   32020 main.go:141] libmachine: (ha-381619)   </devices>
	I1028 17:24:33.384331   32020 main.go:141] libmachine: (ha-381619) </domain>
	I1028 17:24:33.384339   32020 main.go:141] libmachine: (ha-381619) 
	I1028 17:24:33.388368   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:d7:31:89 in network default
	I1028 17:24:33.388983   32020 main.go:141] libmachine: (ha-381619) Ensuring networks are active...
	I1028 17:24:33.389001   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:33.389577   32020 main.go:141] libmachine: (ha-381619) Ensuring network default is active
	I1028 17:24:33.389893   32020 main.go:141] libmachine: (ha-381619) Ensuring network mk-ha-381619 is active
	I1028 17:24:33.390366   32020 main.go:141] libmachine: (ha-381619) Getting domain xml...
	I1028 17:24:33.390966   32020 main.go:141] libmachine: (ha-381619) Creating domain...
	I1028 17:24:34.558865   32020 main.go:141] libmachine: (ha-381619) Waiting to get IP...
	I1028 17:24:34.559610   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.559962   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.559982   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.559945   32043 retry.go:31] will retry after 257.179075ms: waiting for machine to come up
	I1028 17:24:34.818320   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:34.818636   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:34.818664   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:34.818591   32043 retry.go:31] will retry after 336.999416ms: waiting for machine to come up
	I1028 17:24:35.156955   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.157385   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.157410   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.157352   32043 retry.go:31] will retry after 376.336351ms: waiting for machine to come up
	I1028 17:24:35.534739   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.535148   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.535176   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.535109   32043 retry.go:31] will retry after 414.103212ms: waiting for machine to come up
	I1028 17:24:35.950512   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:35.950871   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:35.950902   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:35.950833   32043 retry.go:31] will retry after 701.752446ms: waiting for machine to come up
	I1028 17:24:36.653573   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:36.653919   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:36.653945   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:36.653879   32043 retry.go:31] will retry after 793.432647ms: waiting for machine to come up
	I1028 17:24:37.448827   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:37.449212   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:37.449233   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:37.449175   32043 retry.go:31] will retry after 894.965011ms: waiting for machine to come up
	I1028 17:24:38.345655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:38.346083   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:38.346104   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:38.346040   32043 retry.go:31] will retry after 955.035568ms: waiting for machine to come up
	I1028 17:24:39.303112   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:39.303513   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:39.303566   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:39.303470   32043 retry.go:31] will retry after 1.649236041s: waiting for machine to come up
	I1028 17:24:40.955622   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:40.956156   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:40.956183   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:40.956118   32043 retry.go:31] will retry after 1.776451571s: waiting for machine to come up
	I1028 17:24:42.733883   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:42.734354   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:42.734378   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:42.734330   32043 retry.go:31] will retry after 2.290450392s: waiting for machine to come up
	I1028 17:24:45.027299   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:45.027697   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:45.027727   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:45.027647   32043 retry.go:31] will retry after 3.000171726s: waiting for machine to come up
	I1028 17:24:48.029293   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:48.029625   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:48.029642   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:48.029599   32043 retry.go:31] will retry after 3.464287385s: waiting for machine to come up
	I1028 17:24:51.498145   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:51.498494   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find current IP address of domain ha-381619 in network mk-ha-381619
	I1028 17:24:51.498520   32020 main.go:141] libmachine: (ha-381619) DBG | I1028 17:24:51.498450   32043 retry.go:31] will retry after 4.798676944s: waiting for machine to come up
	I1028 17:24:56.301062   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301461   32020 main.go:141] libmachine: (ha-381619) Found IP for machine: 192.168.39.230
	I1028 17:24:56.301476   32020 main.go:141] libmachine: (ha-381619) Reserving static IP address...
	I1028 17:24:56.301485   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has current primary IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.301800   32020 main.go:141] libmachine: (ha-381619) DBG | unable to find host DHCP lease matching {name: "ha-381619", mac: "52:54:00:bf:e3:f2", ip: "192.168.39.230"} in network mk-ha-381619
	I1028 17:24:56.367996   32020 main.go:141] libmachine: (ha-381619) Reserved static IP address: 192.168.39.230
	I1028 17:24:56.368025   32020 main.go:141] libmachine: (ha-381619) Waiting for SSH to be available...
	I1028 17:24:56.368033   32020 main.go:141] libmachine: (ha-381619) DBG | Getting to WaitForSSH function...
	I1028 17:24:56.370488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.370848   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.370872   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.371022   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH client type: external
	I1028 17:24:56.371056   32020 main.go:141] libmachine: (ha-381619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa (-rw-------)
	I1028 17:24:56.371091   32020 main.go:141] libmachine: (ha-381619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:24:56.371104   32020 main.go:141] libmachine: (ha-381619) DBG | About to run SSH command:
	I1028 17:24:56.371114   32020 main.go:141] libmachine: (ha-381619) DBG | exit 0
	I1028 17:24:56.492195   32020 main.go:141] libmachine: (ha-381619) DBG | SSH cmd err, output: <nil>: 
	I1028 17:24:56.492449   32020 main.go:141] libmachine: (ha-381619) KVM machine creation complete!
	I1028 17:24:56.492777   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:56.493326   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493514   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:56.493649   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:24:56.493664   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:24:56.494850   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:24:56.494862   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:24:56.494867   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:24:56.494872   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.496787   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497152   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.497174   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.497302   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.497464   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497595   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.497725   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.497885   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.498064   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.498078   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:24:56.595488   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:24:56.595509   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:24:56.595519   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.597859   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598187   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.598209   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.598403   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.598582   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.598880   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.599036   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.599254   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.599265   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:24:56.696771   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:24:56.696858   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:24:56.696872   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:24:56.696881   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697109   32020 buildroot.go:166] provisioning hostname "ha-381619"
	I1028 17:24:56.697130   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.697282   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.699770   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700115   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.700139   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.700271   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.700441   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700571   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.700701   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.700825   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.701013   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.701029   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619 && echo "ha-381619" | sudo tee /etc/hostname
	I1028 17:24:56.814628   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:24:56.814655   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.817104   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817470   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.817491   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.817657   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:56.817827   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.817992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:56.818124   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:56.818278   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:56.818455   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:56.818475   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:24:56.926794   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:24:56.926821   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:24:56.926841   32020 buildroot.go:174] setting up certificates
	I1028 17:24:56.926853   32020 provision.go:84] configureAuth start
	I1028 17:24:56.926865   32020 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:24:56.927086   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:56.929479   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929816   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.929835   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.929984   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:56.931934   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932225   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:56.932249   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:56.932384   32020 provision.go:143] copyHostCerts
	I1028 17:24:56.932411   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932452   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:24:56.932465   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:24:56.932554   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:24:56.932658   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932682   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:24:56.932692   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:24:56.932731   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:24:56.932840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932873   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:24:56.932883   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:24:56.932921   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:24:56.933013   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619 san=[127.0.0.1 192.168.39.230 ha-381619 localhost minikube]
	I1028 17:24:57.000217   32020 provision.go:177] copyRemoteCerts
	I1028 17:24:57.000264   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:24:57.000288   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.002585   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.002859   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.002887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.003010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.003192   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.003327   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.003456   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.082327   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:24:57.082386   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:24:57.108992   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:24:57.109040   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 17:24:57.131168   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:24:57.131225   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:24:57.153241   32020 provision.go:87] duration metric: took 226.378501ms to configureAuth
	I1028 17:24:57.153264   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:24:57.153419   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:24:57.153491   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.155887   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156229   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.156255   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.156416   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.156589   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156751   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.156909   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.157032   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.157170   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.157183   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:24:57.371091   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:24:57.371116   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:24:57.371138   32020 main.go:141] libmachine: (ha-381619) Calling .GetURL
	I1028 17:24:57.372265   32020 main.go:141] libmachine: (ha-381619) DBG | Using libvirt version 6000000
	I1028 17:24:57.374388   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374694   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.374715   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.374887   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:24:57.374900   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:24:57.374907   32020 client.go:171] duration metric: took 24.586826396s to LocalClient.Create
	I1028 17:24:57.374929   32020 start.go:167] duration metric: took 24.586887382s to libmachine.API.Create "ha-381619"
	I1028 17:24:57.374942   32020 start.go:293] postStartSetup for "ha-381619" (driver="kvm2")
	I1028 17:24:57.374957   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:24:57.374978   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.375196   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:24:57.375226   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.377231   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377544   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.377561   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.377690   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.377841   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.378010   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.378127   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.458768   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:24:57.463205   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:24:57.463222   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:24:57.463283   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:24:57.463370   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:24:57.463382   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:24:57.463492   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:24:57.473092   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:24:57.499838   32020 start.go:296] duration metric: took 124.881379ms for postStartSetup
	I1028 17:24:57.499880   32020 main.go:141] libmachine: (ha-381619) Calling .GetConfigRaw
	I1028 17:24:57.500412   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.502520   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.502817   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.502846   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.503009   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:24:57.503210   32020 start.go:128] duration metric: took 24.732586487s to createHost
	I1028 17:24:57.503234   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.505276   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505578   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.505602   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.505703   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.505855   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.505992   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.506115   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.506245   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:24:57.506406   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:24:57.506418   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:24:57.608878   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136297.586420313
	
	I1028 17:24:57.608900   32020 fix.go:216] guest clock: 1730136297.586420313
	I1028 17:24:57.608919   32020 fix.go:229] Guest: 2024-10-28 17:24:57.586420313 +0000 UTC Remote: 2024-10-28 17:24:57.503223131 +0000 UTC m=+24.834191366 (delta=83.197182ms)
	I1028 17:24:57.608956   32020 fix.go:200] guest clock delta is within tolerance: 83.197182ms
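The guest-clock check above compares the remote `date +%s.%N` reading with the local wall clock and accepts the host if the delta stays within a tolerance. A small stdlib-only sketch of that comparison; parseGuestClock and the one-second tolerance are illustrative choices, not minikube's exact values:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
// It assumes the 9-digit nanosecond field that date +%s.%N prints.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730136297.586420313")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance for illustration; the real threshold lives in minikube's fix.go.
	const tolerance = time.Second
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}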
	I1028 17:24:57.608963   32020 start.go:83] releasing machines lock for "ha-381619", held for 24.838412899s
	I1028 17:24:57.608987   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.609175   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:57.611488   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611798   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.611830   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.611946   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612411   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612586   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:24:57.612684   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:24:57.612719   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.612770   32020 ssh_runner.go:195] Run: cat /version.json
	I1028 17:24:57.612787   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:24:57.615260   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615428   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615614   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615648   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615673   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:57.615698   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:57.615759   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615940   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:24:57.615944   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616121   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:24:57.616269   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:24:57.616272   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.616376   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:24:57.711561   32020 ssh_runner.go:195] Run: systemctl --version
	I1028 17:24:57.717385   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:24:57.881204   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:24:57.887117   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:24:57.887178   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:24:57.902953   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:24:57.902971   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:24:57.903029   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:24:57.919599   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:24:57.932865   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:24:57.932911   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:24:57.945714   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:24:57.958712   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:24:58.074716   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:24:58.228971   32020 docker.go:233] disabling docker service ...
	I1028 17:24:58.229043   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:24:58.242560   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:24:58.255313   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:24:58.370441   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:24:58.483893   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:24:58.497247   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:24:58.514703   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:24:58.514757   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.524413   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:24:58.524490   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.534125   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.543414   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.553077   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:24:58.562606   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.572154   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:24:58.588419   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
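The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs, and open unprivileged ports. A compact Go sketch of the first two substitutions with regexp; the real flow shells the sed commands out over SSH, so this is only an illustration:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0644); err != nil {
		panic(err)
	}
	fmt.Println("02-crio.conf updated")
}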
	I1028 17:24:58.597992   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:24:58.606565   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:24:58.606613   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:24:58.618268   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
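The netfilter step above is a check-then-load: if the bridge sysctl path is missing, load br_netfilter, then enable IPv4 forwarding. A minimal sketch of the same sequence (must run as root; error handling trimmed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge-nf sysctl is missing, the br_netfilter module is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter failed: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, the equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintf(os.Stderr, "enabling ip_forward failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}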
	I1028 17:24:58.627230   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:24:58.734287   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:24:58.826354   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:24:58.826428   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:24:58.830997   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:24:58.831057   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:24:58.834579   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:24:58.876875   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:24:58.876953   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.903643   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:24:58.932572   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:24:58.933808   32020 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:24:58.935970   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936230   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:24:58.936257   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:24:58.936509   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:24:58.940296   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:24:58.952574   32020 kubeadm.go:883] updating cluster {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:24:58.952676   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:24:58.952732   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:24:58.984654   32020 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 17:24:58.984732   32020 ssh_runner.go:195] Run: which lz4
	I1028 17:24:58.988394   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 17:24:58.988478   32020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 17:24:58.992506   32020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 17:24:58.992533   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 17:25:00.255551   32020 crio.go:462] duration metric: took 1.267100193s to copy over tarball
	I1028 17:25:00.255628   32020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 17:25:02.245448   32020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.989785325s)
	I1028 17:25:02.245479   32020 crio.go:469] duration metric: took 1.989902074s to extract the tarball
	I1028 17:25:02.245485   32020 ssh_runner.go:146] rm: /preloaded.tar.lz4
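The preload handling above copies the cri-o preload tarball to the guest only when the stat check fails, extracts it into /var with xattrs preserved, and then removes it. A sketch of the extraction and cleanup with os/exec, mirroring the tar flags in the log (requires tar with lz4 support and root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Extract only if the tarball is actually present, mirroring the stat check in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preload tarball found, skipping extraction")
		return
	}
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extraction failed: %v: %s\n", err, out)
		os.Exit(1)
	}
	// The tarball is removed afterwards to free disk space, as the log shows.
	if err := os.Remove(tarball); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("preloaded images extracted into /var")
}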
	I1028 17:25:02.282635   32020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:25:02.327962   32020 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:25:02.327983   32020 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:25:02.327990   32020 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.2 crio true true} ...
	I1028 17:25:02.328079   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:02.328139   32020 ssh_runner.go:195] Run: crio config
	I1028 17:25:02.370696   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:02.370725   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:02.370738   32020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:25:02.370766   32020 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-381619 NodeName:ha-381619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:25:02.370888   32020 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-381619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
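The kubeadm configuration above is rendered by minikube from Go templates before being written to /var/tmp/minikube/kubeadm.yaml.new later in the log. A stdlib-only sketch of rendering one fragment with text/template; the nodeConfig struct and the fragment are illustrative, not minikube's actual types or template:

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds just the values substituted into the fragment below.
type nodeConfig struct {
	NodeName string
	NodeIP   string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	cfg := nodeConfig{NodeName: "ha-381619", NodeIP: "192.168.39.230"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}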
	
	I1028 17:25:02.370908   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:02.370947   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:02.386589   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:02.386701   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 17:25:02.386768   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:02.396553   32020 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:25:02.396617   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 17:25:02.405738   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 17:25:02.421400   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:02.437117   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 17:25:02.452375   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 17:25:02.467922   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:02.471573   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
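The /etc/hosts edits here and earlier (host.minikube.internal, then control-plane.minikube.internal) filter out any stale line for the name and append a fresh mapping via a temporary file. A simplified Go version of the same idea; pinHost is a hypothetical helper, takes no lock, and assumes root:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites /etc/hosts so that exactly one line maps name to ip.
func pinHost(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping whose line ends in "<tab>name", as the grep -v above does.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := "/etc/hosts.minikube-tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, "/etc/hosts")
}

func main() {
	if err := pinHost("192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}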
	I1028 17:25:02.483093   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:02.609045   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:25:02.625565   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.230
	I1028 17:25:02.625588   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:02.625605   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.625774   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:02.625839   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:02.625856   32020 certs.go:256] generating profile certs ...
	I1028 17:25:02.625920   32020 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:02.625937   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt with IP's: []
	I1028 17:25:02.808278   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt ...
	I1028 17:25:02.808301   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt: {Name:mkc46e4b9b851301d42b46f45c8b044b11edfb36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808454   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key ...
	I1028 17:25:02.808464   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key: {Name:mkd681d3c01379608131f30441747317e91c7a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:02.808570   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb
	I1028 17:25:02.808586   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.254]
	I1028 17:25:03.000249   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb ...
	I1028 17:25:03.000276   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb: {Name:mka7f7f8394389959cb184a46e51c1572954cddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000436   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb ...
	I1028 17:25:03.000449   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb: {Name:mk9ae1b9eef85a6c1bbc7739c982c84bfb111d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.000555   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:03.000643   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.884e45fb -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
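The apiserver certificate generated above carries IP SANs for the node address, the HA VIP (192.168.39.254), loopback, and the first address of the service CIDR (10.96.0.0/12 gives 10.96.0.1, the in-cluster kubernetes service IP). A small sketch of deriving that first service address; firstServiceIP is an illustrative helper that only handles IPv4 and ignores octet overflow:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the first usable address of a service CIDR,
// e.g. 10.96.0.0/12 -> 10.96.0.1, which the apiserver cert must cover.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("only IPv4 CIDRs handled in this sketch: %s", cidr)
	}
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[3]++ // network address + 1
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	// Together with 127.0.0.1, the node IP and the HA VIP this forms the cert's IP SAN list.
	fmt.Println(ip) // 10.96.0.1
}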
	I1028 17:25:03.000695   32020 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:25:03.000710   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt with IP's: []
	I1028 17:25:03.126776   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt ...
	I1028 17:25:03.126802   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt: {Name:mk682452f5be7b32ad3e949275f7af954945db7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.126938   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key ...
	I1028 17:25:03.126948   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key: {Name:mk5feeb9713d67bfc630ef82b40280ce400bc4ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:03.127009   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:03.127027   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:03.127041   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:03.127053   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:03.127070   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:03.127083   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:03.127094   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:03.127106   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:03.127161   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:03.127194   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:03.127204   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:03.127228   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:03.127253   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:03.127274   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:03.127311   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:03.127335   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.127348   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.127360   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.127858   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:03.153264   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:03.175704   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:03.198131   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:03.220379   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 17:25:03.243352   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:25:03.265623   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:03.287951   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:03.312260   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:03.336494   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:03.363576   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:03.401524   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:25:03.430796   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:03.437428   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:03.448106   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452501   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.452553   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:03.458194   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:03.468982   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:03.479358   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483520   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.483564   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:03.488936   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:03.499033   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:03.509212   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513380   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.513413   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:03.518680   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:25:03.528774   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:03.532547   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:03.532597   32020 kubeadm.go:392] StartCluster: {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:03.532684   32020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:25:03.532747   32020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:25:03.571597   32020 cri.go:89] found id: ""
	I1028 17:25:03.571655   32020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 17:25:03.581447   32020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 17:25:03.590775   32020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 17:25:03.599971   32020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 17:25:03.599983   32020 kubeadm.go:157] found existing configuration files:
	
	I1028 17:25:03.600011   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 17:25:03.608531   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 17:25:03.608565   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 17:25:03.617452   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 17:25:03.626079   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 17:25:03.626124   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 17:25:03.635124   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.644097   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 17:25:03.644143   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 17:25:03.653605   32020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 17:25:03.662453   32020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 17:25:03.662497   32020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 17:25:03.671488   32020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 17:25:03.865602   32020 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 17:25:14.531712   32020 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 17:25:14.531787   32020 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 17:25:14.531884   32020 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 17:25:14.532023   32020 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 17:25:14.532157   32020 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 17:25:14.532250   32020 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 17:25:14.533662   32020 out.go:235]   - Generating certificates and keys ...
	I1028 17:25:14.533743   32020 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 17:25:14.533841   32020 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 17:25:14.533931   32020 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 17:25:14.534016   32020 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 17:25:14.534080   32020 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 17:25:14.534133   32020 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 17:25:14.534179   32020 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 17:25:14.534283   32020 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534363   32020 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 17:25:14.534530   32020 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-381619 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I1028 17:25:14.534620   32020 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 17:25:14.534728   32020 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 17:25:14.534800   32020 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 17:25:14.534868   32020 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 17:25:14.534934   32020 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 17:25:14.535013   32020 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 17:25:14.535092   32020 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 17:25:14.535200   32020 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 17:25:14.535281   32020 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 17:25:14.535399   32020 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 17:25:14.535478   32020 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 17:25:14.537017   32020 out.go:235]   - Booting up control plane ...
	I1028 17:25:14.537115   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 17:25:14.537184   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 17:25:14.537257   32020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 17:25:14.537408   32020 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 17:25:14.537527   32020 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 17:25:14.537591   32020 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 17:25:14.537728   32020 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 17:25:14.537862   32020 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 17:25:14.537919   32020 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001240837s
	I1028 17:25:14.537979   32020 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 17:25:14.538029   32020 kubeadm.go:310] [api-check] The API server is healthy after 5.745465318s
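The kubelet-check and api-check phases above poll local health endpoints until they answer 200 OK or a deadline passes. A minimal polling sketch against the kubelet's http://127.0.0.1:10248/healthz; waitHealthy and its intervals are illustrative, only the URL and the 4m0s upper bound come from the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	// 4m0s matches the upper bound kubeadm prints for the kubelet check.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}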
	I1028 17:25:14.538126   32020 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 17:25:14.538233   32020 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 17:25:14.538314   32020 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 17:25:14.538487   32020 kubeadm.go:310] [mark-control-plane] Marking the node ha-381619 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 17:25:14.538537   32020 kubeadm.go:310] [bootstrap-token] Using token: z48g6f.v3e9buj5ot2drke2
	I1028 17:25:14.539818   32020 out.go:235]   - Configuring RBAC rules ...
	I1028 17:25:14.539934   32020 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 17:25:14.540010   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 17:25:14.540140   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 17:25:14.540310   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 17:25:14.540484   32020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 17:25:14.540575   32020 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 17:25:14.540725   32020 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 17:25:14.540796   32020 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 17:25:14.540853   32020 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 17:25:14.540862   32020 kubeadm.go:310] 
	I1028 17:25:14.540934   32020 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 17:25:14.540941   32020 kubeadm.go:310] 
	I1028 17:25:14.541053   32020 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 17:25:14.541063   32020 kubeadm.go:310] 
	I1028 17:25:14.541098   32020 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 17:25:14.541149   32020 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 17:25:14.541207   32020 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 17:25:14.541220   32020 kubeadm.go:310] 
	I1028 17:25:14.541267   32020 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 17:25:14.541273   32020 kubeadm.go:310] 
	I1028 17:25:14.541311   32020 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 17:25:14.541317   32020 kubeadm.go:310] 
	I1028 17:25:14.541391   32020 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 17:25:14.541462   32020 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 17:25:14.541520   32020 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 17:25:14.541526   32020 kubeadm.go:310] 
	I1028 17:25:14.541594   32020 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 17:25:14.541676   32020 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 17:25:14.541684   32020 kubeadm.go:310] 
	I1028 17:25:14.541772   32020 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.541903   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 17:25:14.541939   32020 kubeadm.go:310] 	--control-plane 
	I1028 17:25:14.541952   32020 kubeadm.go:310] 
	I1028 17:25:14.542037   32020 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 17:25:14.542044   32020 kubeadm.go:310] 
	I1028 17:25:14.542111   32020 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z48g6f.v3e9buj5ot2drke2 \
	I1028 17:25:14.542209   32020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
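	The join commands above embed the bootstrap token and the CA certificate hash that kubeadm printed at init time. As a rough illustration (not something this run executes), the discovery hash can be recomputed from the cluster CA on the control-plane host; the path below is the kubeadm default /etc/kubernetes/pki/ca.crt, whereas minikube keeps its certificates under /var/lib/minikube/certs, so adjust accordingly.
	    # hypothetical manual check, run on the control-plane node:
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | sha256sum | cut -d' ' -f1
	    # the output should match the value after "sha256:" in --discovery-token-ca-cert-hash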
	I1028 17:25:14.542219   32020 cni.go:84] Creating CNI manager for ""
	I1028 17:25:14.542223   32020 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 17:25:14.543763   32020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 17:25:14.544966   32020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 17:25:14.550724   32020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 17:25:14.550742   32020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 17:25:14.570257   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
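	With one node found, minikube recommends kindnet and applies the generated cni.yaml shown above. A generic way to confirm the CNI took effect, not part of the test harness, is to watch the node leave NotReady once the CNI config lands under /opt/cni; this sketch reuses the in-VM kubectl and kubeconfig paths from the log.
	    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide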
	I1028 17:25:14.924676   32020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 17:25:14.924729   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:14.924751   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619 minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=true
	I1028 17:25:14.954780   32020 ops.go:34] apiserver oom_adj: -16
	I1028 17:25:15.130305   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:15.631369   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.131137   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:16.631423   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.131390   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 17:25:17.226452   32020 kubeadm.go:1113] duration metric: took 2.301774809s to wait for elevateKubeSystemPrivileges
	I1028 17:25:17.226483   32020 kubeadm.go:394] duration metric: took 13.693888567s to StartCluster
	I1028 17:25:17.226504   32020 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.226586   32020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.227504   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:17.227753   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 17:25:17.227749   32020 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:17.227776   32020 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 17:25:17.227845   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:25:17.227858   32020 addons.go:69] Setting storage-provisioner=true in profile "ha-381619"
	I1028 17:25:17.227896   32020 addons.go:234] Setting addon storage-provisioner=true in "ha-381619"
	I1028 17:25:17.227912   32020 addons.go:69] Setting default-storageclass=true in profile "ha-381619"
	I1028 17:25:17.227947   32020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-381619"
	I1028 17:25:17.228016   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:17.227925   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.228398   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228444   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.228490   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.228533   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.243165   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I1028 17:25:17.243382   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I1028 17:25:17.243612   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.243827   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.244081   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244106   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244338   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.244363   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.244419   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244705   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.244874   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.244986   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.245028   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.246886   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:25:17.247245   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 17:25:17.248034   32020 addons.go:234] Setting addon default-storageclass=true in "ha-381619"
	I1028 17:25:17.248080   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:17.248440   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.248495   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.248686   32020 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 17:25:17.259449   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I1028 17:25:17.259906   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.260429   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.260457   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.260757   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.260953   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.262554   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.262967   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I1028 17:25:17.263363   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.263726   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.263747   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.264078   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.264715   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:17.264763   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:17.264944   32020 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 17:25:17.266586   32020 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.266605   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 17:25:17.266623   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.269507   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.269884   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.269905   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.270038   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.270201   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.270351   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.270481   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:17.279872   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I1028 17:25:17.280334   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:17.280920   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:17.280938   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:17.281336   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:17.281528   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:17.283217   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:17.283405   32020 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.283421   32020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 17:25:17.283436   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:17.285906   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286319   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:17.286352   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:17.286428   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:17.286601   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:17.286754   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:17.286885   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:17.359502   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 17:25:17.440263   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 17:25:17.482707   32020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 17:25:17.757670   32020 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
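	The long sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1). A minimal sketch of how the result could be inspected by hand, assuming the default "coredns" ConfigMap name:
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # expected to contain a block of the form:
	    #     hosts {
	    #        192.168.39.1 host.minikube.internal
	    #        fallthrough
	    #     }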
	I1028 17:25:17.987134   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987176   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987203   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987222   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987446   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987453   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987512   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987532   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987544   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987486   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.987487   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987697   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987716   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:17.987723   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:17.987752   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.987764   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:17.987811   32020 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 17:25:17.987831   32020 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 17:25:17.987933   32020 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 17:25:17.987946   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:17.987957   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:17.987961   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:17.988187   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:17.988302   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:17.988326   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.005294   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:25:18.006136   32020 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 17:25:18.006153   32020 round_trippers.go:469] Request Headers:
	I1028 17:25:18.006163   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:25:18.006169   32020 round_trippers.go:473]     Content-Type: application/json
	I1028 17:25:18.006173   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:25:18.009564   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:25:18.009782   32020 main.go:141] libmachine: Making call to close driver server
	I1028 17:25:18.009793   32020 main.go:141] libmachine: (ha-381619) Calling .Close
	I1028 17:25:18.010026   32020 main.go:141] libmachine: Successfully made call to close driver server
	I1028 17:25:18.010041   32020 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 17:25:18.010063   32020 main.go:141] libmachine: (ha-381619) DBG | Closing plugin on server side
	I1028 17:25:18.011483   32020 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 17:25:18.012573   32020 addons.go:510] duration metric: took 784.803587ms for enable addons: enabled=[storage-provisioner default-storageclass]
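	Only storage-provisioner and default-storageclass are enabled for this profile; everything else in the toEnable map above is false. A hedged sketch of checking the addon state from outside the VM, assuming the "ha-381619" profile and the kubeconfig context minikube creates for it:
	    minikube -p ha-381619 addons list
	    kubectl --context ha-381619 get storageclass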
	I1028 17:25:18.012609   32020 start.go:246] waiting for cluster config update ...
	I1028 17:25:18.012623   32020 start.go:255] writing updated cluster config ...
	I1028 17:25:18.013902   32020 out.go:201] 
	I1028 17:25:18.015058   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:18.015120   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.016447   32020 out.go:177] * Starting "ha-381619-m02" control-plane node in "ha-381619" cluster
	I1028 17:25:18.017519   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:25:18.017534   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:25:18.017609   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:25:18.017619   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:25:18.017672   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:18.017831   32020 start.go:360] acquireMachinesLock for ha-381619-m02: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:25:18.017871   32020 start.go:364] duration metric: took 23.784µs to acquireMachinesLock for "ha-381619-m02"
	I1028 17:25:18.017886   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:18.017946   32020 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 17:25:18.019437   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:25:18.019500   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:18.019529   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:18.033319   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I1028 17:25:18.033727   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:18.034182   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:18.034200   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:18.034550   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:18.034715   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:18.034872   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:18.035033   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:25:18.035060   32020 client.go:168] LocalClient.Create starting
	I1028 17:25:18.035096   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:25:18.035126   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035142   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035187   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:25:18.035204   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:25:18.035216   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:25:18.035230   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:25:18.035237   32020 main.go:141] libmachine: (ha-381619-m02) Calling .PreCreateCheck
	I1028 17:25:18.035397   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:18.035746   32020 main.go:141] libmachine: Creating machine...
	I1028 17:25:18.035760   32020 main.go:141] libmachine: (ha-381619-m02) Calling .Create
	I1028 17:25:18.035901   32020 main.go:141] libmachine: (ha-381619-m02) Creating KVM machine...
	I1028 17:25:18.037157   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing default KVM network
	I1028 17:25:18.037313   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found existing private KVM network mk-ha-381619
	I1028 17:25:18.037431   32020 main.go:141] libmachine: (ha-381619-m02) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.037482   32020 main.go:141] libmachine: (ha-381619-m02) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:25:18.037542   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.037441   32379 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.037604   32020 main.go:141] libmachine: (ha-381619-m02) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:25:18.305482   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.305364   32379 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa...
	I1028 17:25:18.398014   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.397913   32379 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk...
	I1028 17:25:18.398067   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing magic tar header
	I1028 17:25:18.398088   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Writing SSH key tar header
	I1028 17:25:18.398095   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:18.398018   32379 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 ...
	I1028 17:25:18.398114   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02
	I1028 17:25:18.398136   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:25:18.398156   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02 (perms=drwx------)
	I1028 17:25:18.398166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:25:18.398180   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:25:18.398187   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:25:18.398194   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:25:18.398201   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Checking permissions on dir: /home
	I1028 17:25:18.398207   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Skipping /home - not owner
	I1028 17:25:18.398217   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:25:18.398254   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:25:18.398277   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:25:18.398289   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:25:18.398304   32020 main.go:141] libmachine: (ha-381619-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:25:18.398338   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:18.399119   32020 main.go:141] libmachine: (ha-381619-m02) define libvirt domain using xml: 
	I1028 17:25:18.399128   32020 main.go:141] libmachine: (ha-381619-m02) <domain type='kvm'>
	I1028 17:25:18.399133   32020 main.go:141] libmachine: (ha-381619-m02)   <name>ha-381619-m02</name>
	I1028 17:25:18.399138   32020 main.go:141] libmachine: (ha-381619-m02)   <memory unit='MiB'>2200</memory>
	I1028 17:25:18.399142   32020 main.go:141] libmachine: (ha-381619-m02)   <vcpu>2</vcpu>
	I1028 17:25:18.399146   32020 main.go:141] libmachine: (ha-381619-m02)   <features>
	I1028 17:25:18.399154   32020 main.go:141] libmachine: (ha-381619-m02)     <acpi/>
	I1028 17:25:18.399160   32020 main.go:141] libmachine: (ha-381619-m02)     <apic/>
	I1028 17:25:18.399167   32020 main.go:141] libmachine: (ha-381619-m02)     <pae/>
	I1028 17:25:18.399171   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399177   32020 main.go:141] libmachine: (ha-381619-m02)   </features>
	I1028 17:25:18.399183   32020 main.go:141] libmachine: (ha-381619-m02)   <cpu mode='host-passthrough'>
	I1028 17:25:18.399188   32020 main.go:141] libmachine: (ha-381619-m02)   
	I1028 17:25:18.399194   32020 main.go:141] libmachine: (ha-381619-m02)   </cpu>
	I1028 17:25:18.399199   32020 main.go:141] libmachine: (ha-381619-m02)   <os>
	I1028 17:25:18.399206   32020 main.go:141] libmachine: (ha-381619-m02)     <type>hvm</type>
	I1028 17:25:18.399211   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='cdrom'/>
	I1028 17:25:18.399223   32020 main.go:141] libmachine: (ha-381619-m02)     <boot dev='hd'/>
	I1028 17:25:18.399234   32020 main.go:141] libmachine: (ha-381619-m02)     <bootmenu enable='no'/>
	I1028 17:25:18.399255   32020 main.go:141] libmachine: (ha-381619-m02)   </os>
	I1028 17:25:18.399268   32020 main.go:141] libmachine: (ha-381619-m02)   <devices>
	I1028 17:25:18.399274   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='cdrom'>
	I1028 17:25:18.399282   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/boot2docker.iso'/>
	I1028 17:25:18.399289   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hdc' bus='scsi'/>
	I1028 17:25:18.399293   32020 main.go:141] libmachine: (ha-381619-m02)       <readonly/>
	I1028 17:25:18.399299   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399305   32020 main.go:141] libmachine: (ha-381619-m02)     <disk type='file' device='disk'>
	I1028 17:25:18.399316   32020 main.go:141] libmachine: (ha-381619-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:25:18.399348   32020 main.go:141] libmachine: (ha-381619-m02)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/ha-381619-m02.rawdisk'/>
	I1028 17:25:18.399365   32020 main.go:141] libmachine: (ha-381619-m02)       <target dev='hda' bus='virtio'/>
	I1028 17:25:18.399403   32020 main.go:141] libmachine: (ha-381619-m02)     </disk>
	I1028 17:25:18.399425   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399439   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='mk-ha-381619'/>
	I1028 17:25:18.399446   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399454   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399464   32020 main.go:141] libmachine: (ha-381619-m02)     <interface type='network'>
	I1028 17:25:18.399473   32020 main.go:141] libmachine: (ha-381619-m02)       <source network='default'/>
	I1028 17:25:18.399483   32020 main.go:141] libmachine: (ha-381619-m02)       <model type='virtio'/>
	I1028 17:25:18.399491   32020 main.go:141] libmachine: (ha-381619-m02)     </interface>
	I1028 17:25:18.399505   32020 main.go:141] libmachine: (ha-381619-m02)     <serial type='pty'>
	I1028 17:25:18.399516   32020 main.go:141] libmachine: (ha-381619-m02)       <target port='0'/>
	I1028 17:25:18.399525   32020 main.go:141] libmachine: (ha-381619-m02)     </serial>
	I1028 17:25:18.399531   32020 main.go:141] libmachine: (ha-381619-m02)     <console type='pty'>
	I1028 17:25:18.399536   32020 main.go:141] libmachine: (ha-381619-m02)       <target type='serial' port='0'/>
	I1028 17:25:18.399544   32020 main.go:141] libmachine: (ha-381619-m02)     </console>
	I1028 17:25:18.399554   32020 main.go:141] libmachine: (ha-381619-m02)     <rng model='virtio'>
	I1028 17:25:18.399564   32020 main.go:141] libmachine: (ha-381619-m02)       <backend model='random'>/dev/random</backend>
	I1028 17:25:18.399578   32020 main.go:141] libmachine: (ha-381619-m02)     </rng>
	I1028 17:25:18.399588   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399596   32020 main.go:141] libmachine: (ha-381619-m02)     
	I1028 17:25:18.399604   32020 main.go:141] libmachine: (ha-381619-m02)   </devices>
	I1028 17:25:18.399613   32020 main.go:141] libmachine: (ha-381619-m02) </domain>
	I1028 17:25:18.399622   32020 main.go:141] libmachine: (ha-381619-m02) 
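	The XML above is the libvirt domain the kvm2 driver generates for the second control-plane VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-381619 network, one on the default NAT network). A rough manual equivalent of what the driver does next, assuming the XML were saved to a file named ha-381619-m02.xml (hypothetical file name):
	    virsh --connect qemu:///system define ha-381619-m02.xml
	    virsh --connect qemu:///system start ha-381619-m02
	    virsh --connect qemu:///system domifaddr ha-381619-m02   # shows addresses once DHCP answers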
	I1028 17:25:18.405867   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:26:9b:68 in network default
	I1028 17:25:18.406379   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring networks are active...
	I1028 17:25:18.406395   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:18.407090   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network default is active
	I1028 17:25:18.407385   32020 main.go:141] libmachine: (ha-381619-m02) Ensuring network mk-ha-381619 is active
	I1028 17:25:18.407717   32020 main.go:141] libmachine: (ha-381619-m02) Getting domain xml...
	I1028 17:25:18.408378   32020 main.go:141] libmachine: (ha-381619-m02) Creating domain...
	I1028 17:25:19.597563   32020 main.go:141] libmachine: (ha-381619-m02) Waiting to get IP...
	I1028 17:25:19.598384   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.598740   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.598789   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.598740   32379 retry.go:31] will retry after 190.903064ms: waiting for machine to come up
	I1028 17:25:19.791078   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:19.791557   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:19.791589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:19.791498   32379 retry.go:31] will retry after 306.415198ms: waiting for machine to come up
	I1028 17:25:20.099990   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.100410   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.100438   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.100363   32379 retry.go:31] will retry after 461.052427ms: waiting for machine to come up
	I1028 17:25:20.562787   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.563226   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.563254   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.563181   32379 retry.go:31] will retry after 399.454176ms: waiting for machine to come up
	I1028 17:25:20.964734   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:20.965138   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:20.965168   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:20.965088   32379 retry.go:31] will retry after 468.537228ms: waiting for machine to come up
	I1028 17:25:21.435633   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:21.436036   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:21.436065   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:21.435978   32379 retry.go:31] will retry after 901.623232ms: waiting for machine to come up
	I1028 17:25:22.338882   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:22.339214   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:22.339251   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:22.339170   32379 retry.go:31] will retry after 1.174231376s: waiting for machine to come up
	I1028 17:25:23.514567   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:23.515122   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:23.515148   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:23.515075   32379 retry.go:31] will retry after 1.47285995s: waiting for machine to come up
	I1028 17:25:24.989376   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:24.989742   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:24.989772   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:24.989693   32379 retry.go:31] will retry after 1.395202662s: waiting for machine to come up
	I1028 17:25:26.387051   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:26.387470   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:26.387497   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:26.387419   32379 retry.go:31] will retry after 1.648219706s: waiting for machine to come up
	I1028 17:25:28.036842   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:28.037349   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:28.037375   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:28.037295   32379 retry.go:31] will retry after 2.189322328s: waiting for machine to come up
	I1028 17:25:30.229493   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:30.229820   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:30.229841   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:30.229780   32379 retry.go:31] will retry after 2.90274213s: waiting for machine to come up
	I1028 17:25:33.134730   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:33.135076   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:33.135092   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:33.135034   32379 retry.go:31] will retry after 4.079584337s: waiting for machine to come up
	I1028 17:25:37.219140   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:37.219485   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find current IP address of domain ha-381619-m02 in network mk-ha-381619
	I1028 17:25:37.219505   32020 main.go:141] libmachine: (ha-381619-m02) DBG | I1028 17:25:37.219442   32379 retry.go:31] will retry after 4.856708442s: waiting for machine to come up
	I1028 17:25:42.077346   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077745   32020 main.go:141] libmachine: (ha-381619-m02) Found IP for machine: 192.168.39.171
	I1028 17:25:42.077766   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has current primary IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.077785   32020 main.go:141] libmachine: (ha-381619-m02) Reserving static IP address...
	I1028 17:25:42.078069   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "ha-381619-m02", mac: "52:54:00:ab:1d:c9", ip: "192.168.39.171"} in network mk-ha-381619
	I1028 17:25:42.145216   32020 main.go:141] libmachine: (ha-381619-m02) Reserved static IP address: 192.168.39.171
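	The driver polls with increasing back-off (roughly 0.2 s growing to about 5 s between attempts) until the guest obtains a DHCP lease, then records 192.168.39.171 as a static reservation. One way to see the same lease from the host, assuming the libvirt network name used in this run:
	    virsh --connect qemu:///system net-dhcp-leases mk-ha-381619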
	I1028 17:25:42.145248   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:42.145256   32020 main.go:141] libmachine: (ha-381619-m02) Waiting for SSH to be available...
	I1028 17:25:42.147449   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:42.147844   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619
	I1028 17:25:42.147868   32020 main.go:141] libmachine: (ha-381619-m02) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:ab:1d:c9
	I1028 17:25:42.148011   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:42.148037   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:42.148079   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:42.148093   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:42.148106   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:42.151405   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:25:42.151422   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:25:42.151430   32020 main.go:141] libmachine: (ha-381619-m02) DBG | command : exit 0
	I1028 17:25:42.151434   32020 main.go:141] libmachine: (ha-381619-m02) DBG | err     : exit status 255
	I1028 17:25:42.151457   32020 main.go:141] libmachine: (ha-381619-m02) DBG | output  : 
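	Exit status 255 here is ssh's own connection-failure code: sshd inside the guest is not accepting connections yet, so the driver waits and repeats the probe. A minimal sketch of that probe, using the key and options from the log above together with the address found earlier:
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa \
	        docker@192.168.39.171 'exit 0'
	    echo $?   # 255 while sshd is still starting, 0 once the guest is reachable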
	I1028 17:25:45.153548   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Getting to WaitForSSH function...
	I1028 17:25:45.155666   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156001   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.156026   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.156153   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH client type: external
	I1028 17:25:45.156174   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa (-rw-------)
	I1028 17:25:45.156209   32020 main.go:141] libmachine: (ha-381619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:25:45.156220   32020 main.go:141] libmachine: (ha-381619-m02) DBG | About to run SSH command:
	I1028 17:25:45.156228   32020 main.go:141] libmachine: (ha-381619-m02) DBG | exit 0
	I1028 17:25:45.284123   32020 main.go:141] libmachine: (ha-381619-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 17:25:45.284412   32020 main.go:141] libmachine: (ha-381619-m02) KVM machine creation complete!
	I1028 17:25:45.284721   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:45.285293   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285476   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:45.285636   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:25:45.285651   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetState
	I1028 17:25:45.286839   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:25:45.286853   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:25:45.286874   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:25:45.286883   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.289343   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289699   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.289732   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.289877   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.290050   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290180   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.290283   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.290450   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.290659   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.290673   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:25:45.403429   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:25:45.403453   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:25:45.403460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.406169   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406520   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.406547   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.406664   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.406833   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.406968   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.407121   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.407274   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.407471   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.407486   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:25:45.516915   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:25:45.516972   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:25:45.516982   32020 main.go:141] libmachine: Provisioning with buildroot...
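	The provisioner is chosen from the ID field of the /etc/os-release output printed above; ID=buildroot selects the buildroot provisioning path. A small sketch of the same check done by hand inside the guest (os-release is shell-sourceable):
	    . /etc/os-release && echo "$ID $VERSION_ID"   # prints: buildroot 2023.02.9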
	I1028 17:25:45.516996   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517247   32020 buildroot.go:166] provisioning hostname "ha-381619-m02"
	I1028 17:25:45.517269   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.517419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.520442   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.520895   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.520951   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.521136   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.521306   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521441   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.521550   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.521679   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.521869   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.521885   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m02 && echo "ha-381619-m02" | sudo tee /etc/hostname
	I1028 17:25:45.647896   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m02
	
	I1028 17:25:45.647923   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.650559   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.650915   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.650946   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.651119   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.651299   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651460   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.651606   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.651778   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:45.651948   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:45.651967   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:25:45.773264   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:25:45.773293   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:25:45.773315   32020 buildroot.go:174] setting up certificates
	I1028 17:25:45.773322   32020 provision.go:84] configureAuth start
	I1028 17:25:45.773330   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetMachineName
	I1028 17:25:45.773552   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:45.776602   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.776920   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.776944   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.777092   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.779167   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779415   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.779440   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.779566   32020 provision.go:143] copyHostCerts
	I1028 17:25:45.779590   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779620   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:25:45.779629   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:25:45.779712   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:25:45.779784   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779808   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:25:45.779815   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:25:45.779839   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:25:45.779883   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779899   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:25:45.779905   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:25:45.779925   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:25:45.779969   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m02 san=[127.0.0.1 192.168.39.171 ha-381619-m02 localhost minikube]
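The SAN list above has to cover every name or address the node's TLS server cert may be reached by: loopback, the machine IP 192.168.39.171, the hostname ha-381619-m02, localhost and minikube. Minikube generates this cert in Go; a rough, purely illustrative openssl equivalent (file names and validity period are assumptions, run under bash) would be:

	# hypothetical re-creation of the logged server-cert generation
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.ha-381619-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 825 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.171,DNS:ha-381619-m02,DNS:localhost,DNS:minikube')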
	I1028 17:25:45.949948   32020 provision.go:177] copyRemoteCerts
	I1028 17:25:45.950001   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:25:45.950022   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:45.952596   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.952955   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:45.953006   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:45.953158   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:45.953335   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:45.953473   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:45.953584   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.038279   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:25:46.038337   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:25:46.061947   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:25:46.062008   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:25:46.084393   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:25:46.084451   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:25:46.107114   32020 provision.go:87] duration metric: took 333.781683ms to configureAuth
	I1028 17:25:46.107142   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:25:46.107303   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:46.107385   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.110324   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110650   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.110678   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.110841   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.111029   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111171   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.111337   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.111521   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.111668   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.111682   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:25:46.333665   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
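The fragment echoed back above is the whole of /etc/sysconfig/crio.minikube: it passes '--insecure-registry 10.96.0.0/12' to CRI-O so the entire in-cluster service CIDR is treated as an insecure registry range. How the file is consumed is not shown in the log; the assumption is that the guest's crio unit sources it as an environment file, which could be checked with:

	# assumption, not visible in the log: crio.service sources /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i -A1 EnvironmentFile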
	I1028 17:25:46.333687   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:25:46.333695   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetURL
	I1028 17:25:46.335063   32020 main.go:141] libmachine: (ha-381619-m02) DBG | Using libvirt version 6000000
	I1028 17:25:46.337491   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.337821   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.337850   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.338022   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:25:46.338038   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:25:46.338046   32020 client.go:171] duration metric: took 28.302974924s to LocalClient.Create
	I1028 17:25:46.338089   32020 start.go:167] duration metric: took 28.303046594s to libmachine.API.Create "ha-381619"
	I1028 17:25:46.338103   32020 start.go:293] postStartSetup for "ha-381619-m02" (driver="kvm2")
	I1028 17:25:46.338115   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:25:46.338137   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.338375   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:25:46.338401   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.340858   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341271   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.341298   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.341419   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.341568   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.341713   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.341825   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.426689   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:25:46.431014   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:25:46.431038   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:25:46.431111   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:25:46.431208   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:25:46.431224   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:25:46.431391   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:25:46.440073   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:46.463120   32020 start.go:296] duration metric: took 125.005816ms for postStartSetup
	I1028 17:25:46.463168   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetConfigRaw
	I1028 17:25:46.463762   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.466198   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466494   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.466531   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.466725   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:25:46.466921   32020 start.go:128] duration metric: took 28.448963909s to createHost
	I1028 17:25:46.466949   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.469249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469565   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.469589   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.469704   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.469861   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.469984   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.470143   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.470307   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:25:46.470485   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I1028 17:25:46.470498   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:25:46.580856   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136346.562587281
	
	I1028 17:25:46.580878   32020 fix.go:216] guest clock: 1730136346.562587281
	I1028 17:25:46.580887   32020 fix.go:229] Guest: 2024-10-28 17:25:46.562587281 +0000 UTC Remote: 2024-10-28 17:25:46.466934782 +0000 UTC m=+73.797903078 (delta=95.652499ms)
	I1028 17:25:46.580901   32020 fix.go:200] guest clock delta is within tolerance: 95.652499ms
	I1028 17:25:46.580907   32020 start.go:83] releasing machines lock for "ha-381619-m02", held for 28.563026837s
	I1028 17:25:46.580924   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.581186   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:46.583856   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.584218   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.584249   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.586494   32020 out.go:177] * Found network options:
	I1028 17:25:46.587894   32020 out.go:177]   - NO_PROXY=192.168.39.230
	W1028 17:25:46.589029   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589070   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589532   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589694   32020 main.go:141] libmachine: (ha-381619-m02) Calling .DriverName
	I1028 17:25:46.589788   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:25:46.589827   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	W1028 17:25:46.589854   32020 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 17:25:46.589924   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:25:46.589942   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHHostname
	I1028 17:25:46.592456   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592681   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592853   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.592873   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.592998   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593129   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593166   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:46.593189   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:46.593257   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593327   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHPort
	I1028 17:25:46.593495   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHKeyPath
	I1028 17:25:46.593488   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.593663   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetSSHUsername
	I1028 17:25:46.593796   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m02/id_rsa Username:docker}
	I1028 17:25:46.834104   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:25:46.840249   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:25:46.840309   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:25:46.857442   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:25:46.857462   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:25:46.857520   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:25:46.874062   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:25:46.887622   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:25:46.887678   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:25:46.901054   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:25:46.914614   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:25:47.030203   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:25:47.173397   32020 docker.go:233] disabling docker service ...
	I1028 17:25:47.173471   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:25:47.187602   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:25:47.200124   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:25:47.343002   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:25:47.463446   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:25:47.477391   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:25:47.495284   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:25:47.495336   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.505232   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:25:47.505290   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.515205   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.524903   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.534665   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:25:47.544548   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.554185   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:25:47.570492   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
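Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager with a pod-scoped conmon cgroup, and open unprivileged low ports via default_sysctls. The approximate end state of /etc/crio/crio.conf.d/02-crio.conf (section headers assumed; the commands only patch individual keys):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"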
	I1028 17:25:47.580150   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:25:47.588959   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:25:47.588998   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:25:47.602144   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:25:47.611274   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:47.728237   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:25:47.819661   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:25:47.819739   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:25:47.825086   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:25:47.825133   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:25:47.828919   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:25:47.865608   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:25:47.865686   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.891971   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:25:47.920487   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:25:47.921941   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:25:47.923245   32020 main.go:141] libmachine: (ha-381619-m02) Calling .GetIP
	I1028 17:25:47.926002   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926296   32020 main.go:141] libmachine: (ha-381619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:1d:c9", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:25:32 +0000 UTC Type:0 Mac:52:54:00:ab:1d:c9 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-381619-m02 Clientid:01:52:54:00:ab:1d:c9}
	I1028 17:25:47.926314   32020 main.go:141] libmachine: (ha-381619-m02) DBG | domain ha-381619-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:ab:1d:c9 in network mk-ha-381619
	I1028 17:25:47.926539   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:25:47.930572   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:47.943132   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:25:47.943291   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:25:47.943533   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.943566   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.957947   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I1028 17:25:47.958254   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.958709   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.958727   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.959022   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.959199   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:25:47.960488   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:47.960756   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:47.960791   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:47.974636   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I1028 17:25:47.975037   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:47.975478   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:47.975496   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:47.975773   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:47.975952   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:47.976140   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.171
	I1028 17:25:47.976153   32020 certs.go:194] generating shared ca certs ...
	I1028 17:25:47.976170   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:47.976307   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:25:47.976364   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:25:47.976377   32020 certs.go:256] generating profile certs ...
	I1028 17:25:47.976489   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:25:47.976518   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6
	I1028 17:25:47.976537   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.254]
	I1028 17:25:48.173298   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 ...
	I1028 17:25:48.173326   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6: {Name:mkf5ce350ef4737e80e11fe080b891074a0af9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173482   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 ...
	I1028 17:25:48.173493   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6: {Name:mk4892e87f7052cc8a58e00369d3170cecec3e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:25:48.173560   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:25:48.173681   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.47ad21f6 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:25:48.173810   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:25:48.173826   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:25:48.173840   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:25:48.173854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:25:48.173866   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:25:48.173879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:25:48.173891   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:25:48.173902   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:25:48.173913   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:25:48.173957   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:25:48.173999   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:25:48.174009   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:25:48.174030   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:25:48.174051   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:25:48.174071   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:25:48.174117   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:25:48.174144   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.174158   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.174169   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.174198   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:48.177148   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177545   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:48.177579   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:48.177737   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:48.177910   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:48.178048   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:48.178158   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:48.248817   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:25:48.254098   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:25:48.264499   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:25:48.268575   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:25:48.278929   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:25:48.283180   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:25:48.292856   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:25:48.296876   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:25:48.306132   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:25:48.310003   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:25:48.319418   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:25:48.323887   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:25:48.335408   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:25:48.360541   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:25:48.384095   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:25:48.407120   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:25:48.429601   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 17:25:48.452108   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:25:48.474717   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:25:48.497519   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:25:48.519884   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:25:48.542530   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:25:48.565246   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:25:48.587411   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:25:48.603353   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:25:48.618794   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:25:48.634198   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:25:48.649902   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:25:48.665540   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:25:48.680907   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 17:25:48.697446   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:25:48.703204   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:25:48.713589   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718016   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.718162   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:25:48.723740   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:25:48.734297   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:25:48.744539   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748653   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.748709   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:25:48.754164   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:25:48.764209   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:25:48.774379   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778691   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.778734   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:25:48.784288   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
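The test -L / ln -fs pairs above build the standard OpenSSL hashed-lookup layout: each CA placed in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 3ec20f2e.0 and 51391683.0 here), which is how TLS clients on the node locate trust anchors. A generic sketch of the same idea for a single cert:

	# hypothetical: expose a CA under its subject-hash name so OpenSSL lookups find it
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"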
	I1028 17:25:48.794987   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:25:48.799006   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:25:48.799053   32020 kubeadm.go:934] updating node {m02 192.168.39.171 8443 v1.31.2 crio true true} ...
	I1028 17:25:48.799121   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:25:48.799142   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:25:48.799168   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:25:48.823470   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:25:48.823527   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
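The generated static pod above runs kube-vip in ARP mode with leader election on the plndr-cp-lock lease and announces the control-plane VIP 192.168.39.254 on eth0, load-balancing API traffic to port 8443. Once a control-plane node holds the lease, the VIP should answer on that node; a quick, hypothetical sanity check from the leader would be:

	# hypothetical check on the current kube-vip leader
	ip addr show dev eth0 | grep 192.168.39.254
	curl -k https://192.168.39.254:8443/healthz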
	I1028 17:25:48.823569   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.835145   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:25:48.835188   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:25:48.844460   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:25:48.844491   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844545   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:25:48.844552   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 17:25:48.844586   32020 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 17:25:48.848931   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:25:48.848960   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:25:49.845765   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.845846   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:25:49.851022   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:25:49.851049   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:25:49.995196   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:25:50.018003   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.018112   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:25:50.028108   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:25:50.028154   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
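Each binary is fetched with a checksum=file: query pointing at the matching .sha256 file, i.e. the download is verified against the published SHA-256 before it is cached locally and copied into /var/lib/minikube/binaries/v1.31.2 on the guest. A manual equivalent for one binary, as a sketch of the same verification (minikube performs this inside its download package):

	# hypothetical manual download + checksum verification of the kubelet binary
	curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check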
	I1028 17:25:50.413235   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:25:50.422462   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 17:25:50.439863   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:25:50.457114   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:25:50.474256   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:25:50.477946   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:25:50.489942   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:25:50.615829   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:25:50.634721   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:25:50.635033   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:25:50.635082   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:25:50.649391   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I1028 17:25:50.649767   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:25:50.650191   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:25:50.650209   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:25:50.650503   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:25:50.650660   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:25:50.650788   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:25:50.650874   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:25:50.650889   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:25:50.653655   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654061   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:25:50.654087   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:25:50.654224   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:25:50.654401   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:25:50.654535   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:25:50.654636   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:25:50.789658   32020 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:25:50.789699   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443"
	I1028 17:26:12.167714   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mv9caz.1zql23j8gw9y6cks --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m02 --control-plane --apiserver-advertise-address=192.168.39.171 --apiserver-bind-port=8443": (21.377987897s)
	I1028 17:26:12.167759   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:26:12.604075   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m02 minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:26:12.730286   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:26:12.839048   32020 start.go:319] duration metric: took 22.188254958s to joinCluster
	I1028 17:26:12.839133   32020 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:12.839439   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:12.840330   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:26:12.841472   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:26:13.041048   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:26:13.058928   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:26:13.059251   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:26:13.059331   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:26:13.059574   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m02" to be "Ready" ...
	I1028 17:26:13.059667   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.059677   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.059688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.059694   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.077343   32020 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 17:26:13.560169   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:13.560188   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:13.560196   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:13.560200   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:13.573882   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:14.060794   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.060818   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.060828   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.060835   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.068335   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:14.560535   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:14.560554   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:14.560562   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:14.560567   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:14.564008   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:15.060016   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.060055   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.060066   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.060072   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.064096   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:15.064637   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:15.559999   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:15.560030   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:15.560041   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:15.560046   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:15.563431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.059828   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.059852   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.059862   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.059867   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.063732   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:16.560697   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:16.560722   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:16.560733   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:16.560739   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:16.564261   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:17.060671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.060698   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.060711   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.060718   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.064995   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:17.066041   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:17.560713   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:17.560732   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:17.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:17.560749   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:17.563531   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:18.060093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.060116   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.060127   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.060135   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.064122   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:18.559857   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:18.559879   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:18.559887   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:18.559898   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:18.563832   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:19.059842   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.059867   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.059879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.059884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.065030   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:19.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:19.559871   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:19.559879   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:19.559884   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:19.562800   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:19.563587   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:20.059873   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.059895   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.059905   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.059912   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.073315   32020 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 17:26:20.560212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:20.560231   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:20.560239   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:20.560243   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:20.650492   32020 round_trippers.go:574] Response Status: 200 OK in 90 milliseconds
	I1028 17:26:21.059937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.059963   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.059974   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.059979   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.064508   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:21.560559   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:21.560581   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:21.560590   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:21.560594   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:21.563714   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:21.564443   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:22.059724   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.059744   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.059752   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.059757   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.063391   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:22.560710   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:22.560731   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:22.560738   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:22.560742   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:22.563846   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.060524   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.060544   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.060554   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.060561   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.064448   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:23.560417   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:23.560438   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:23.560447   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:23.560451   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:23.563535   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.060636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.060664   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.060675   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.060683   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.064043   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:24.064451   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:24.559868   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:24.559899   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:24.559907   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:24.559910   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:24.562925   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:25.059880   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.059902   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.059910   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.059915   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.063972   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:25.559872   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:25.559894   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:25.559901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:25.559905   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:25.563081   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:26.060748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.060770   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.060782   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.060788   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.064990   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:26.065576   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:26.559841   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:26.559863   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:26.559871   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:26.559876   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:26.562740   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:27.059746   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.059768   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.059775   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.059779   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.063135   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:27.560126   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:27.560145   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:27.560153   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:27.560158   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:27.563096   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:28.060723   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.060746   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.060757   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.060763   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.065003   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:28.560732   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:28.560757   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:28.560767   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:28.560774   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:28.563965   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:28.564617   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:29.059876   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.059903   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.059914   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.059919   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.067282   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:29.559851   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:29.559872   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:29.559880   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:29.559883   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:29.562804   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:30.059831   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.059853   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.059867   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.059875   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.063855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:30.560631   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:30.560653   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:30.560665   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:30.560670   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:30.563630   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:31.059907   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.059925   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.059933   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.059938   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.064319   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:31.065078   32020 node_ready.go:53] node "ha-381619-m02" has status "Ready":"False"
	I1028 17:26:31.560248   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:31.560271   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:31.560278   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:31.560282   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:31.563146   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:32.059755   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.059779   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.059790   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.059796   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.065145   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:32.560006   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:32.560026   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:32.560034   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:32.560038   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:32.563453   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.060614   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.060633   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.060641   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.060647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.064544   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.066373   32020 node_ready.go:49] node "ha-381619-m02" has status "Ready":"True"
	I1028 17:26:33.066389   32020 node_ready.go:38] duration metric: took 20.006796944s for node "ha-381619-m02" to be "Ready" ...
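The polling above is the usual control loop for node readiness: GET the node object roughly every 500ms and check its Ready condition until it reports True. A minimal client-go sketch of the same loop follows for reference; the kubeconfig path, poll interval, and error handling are illustrative assumptions, not taken from the minikube source.

    // node_ready_sketch.go: poll a node until its Ready condition is True.
    // Assumption: the kubeconfig path and 500ms interval are illustrative only.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19872-13443/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            // Fetch the node and scan its conditions for Ready=True,
            // mirroring the repeated GET /api/v1/nodes/ha-381619-m02 above.
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-381619-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }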
	I1028 17:26:33.066397   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:26:33.066462   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:33.066470   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.066477   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.066482   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.074203   32020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 17:26:33.082515   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.082586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:26:33.082595   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.082602   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.082607   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.095144   32020 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 17:26:33.095832   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.095846   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.095854   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.095858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.101134   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:26:33.101733   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.101757   32020 pod_ready.go:82] duration metric: took 19.21928ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101770   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.101833   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:26:33.101844   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.101853   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.101858   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.105945   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.108337   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.108355   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.108367   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.108372   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.113026   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.113662   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.113683   32020 pod_ready.go:82] duration metric: took 11.906137ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113694   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.113752   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:26:33.113762   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.113774   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.113782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.123002   32020 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 17:26:33.123632   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.123647   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.123654   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.123658   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.127965   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.128570   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.128593   32020 pod_ready.go:82] duration metric: took 14.890353ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128604   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.128669   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:26:33.128680   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.128690   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.128695   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.132736   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.133266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.133282   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.133291   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.133297   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.135365   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:33.135735   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.135750   32020 pod_ready.go:82] duration metric: took 7.136636ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.135762   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.261122   32020 request.go:632] Waited for 125.293136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261209   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:26:33.261217   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.261226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.261234   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.263967   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:33.461031   32020 request.go:632] Waited for 196.380501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461114   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:33.461126   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.461137   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.461148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.465245   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:33.465839   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.465854   32020 pod_ready.go:82] duration metric: took 330.085581ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.465863   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.661130   32020 request.go:632] Waited for 195.210858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:26:33.661218   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.661226   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.661231   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.664592   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.861613   32020 request.go:632] Waited for 196.398754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:33.861693   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:33.861703   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:33.861708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:33.865300   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:33.865923   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:33.865943   32020 pod_ready.go:82] duration metric: took 400.074085ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:33.865954   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.061082   32020 request.go:632] Waited for 195.035949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061146   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:26:34.061154   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.061164   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.061177   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.065243   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:34.261295   32020 request.go:632] Waited for 195.377372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261362   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:34.261369   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.261377   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.261384   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.264122   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:34.264806   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.264824   32020 pod_ready.go:82] duration metric: took 398.860925ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.264834   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.461015   32020 request.go:632] Waited for 196.107238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461086   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:26:34.461092   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.461099   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.461107   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.464532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.661679   32020 request.go:632] Waited for 196.369344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661748   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:34.661755   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.661763   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.661769   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.664905   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:34.665450   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:34.665471   32020 pod_ready.go:82] duration metric: took 400.628457ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.665485   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:34.861555   32020 request.go:632] Waited for 195.998426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861607   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:26:34.861612   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:34.861619   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:34.861625   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:34.865054   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.061002   32020 request.go:632] Waited for 195.260133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061074   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.061081   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.061090   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.061103   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.067316   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:35.067855   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.067872   32020 pod_ready.go:82] duration metric: took 402.381503ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.067883   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.261021   32020 request.go:632] Waited for 193.06469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261075   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:26:35.261080   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.261087   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.261091   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.264532   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.461647   32020 request.go:632] Waited for 196.379594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461699   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:35.461704   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.461712   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.461716   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.464708   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:26:35.465310   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.465326   32020 pod_ready.go:82] duration metric: took 397.438256ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.465336   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.660832   32020 request.go:632] Waited for 195.429914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660887   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:26:35.660892   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.660901   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.660906   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.664825   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.861091   32020 request.go:632] Waited for 195.400527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861176   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:26:35.861185   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:35.861193   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:35.861199   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:35.864874   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:35.865496   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:35.865512   32020 pod_ready.go:82] duration metric: took 400.170514ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:35.865524   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.061640   32020 request.go:632] Waited for 196.040174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:26:36.061702   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.061709   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.061712   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.067912   32020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 17:26:36.260741   32020 request.go:632] Waited for 192.270672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260796   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:26:36.260801   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.260808   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.260811   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.264431   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.265062   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:26:36.265078   32020 pod_ready.go:82] duration metric: took 399.548106ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:26:36.265089   32020 pod_ready.go:39] duration metric: took 3.19868237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
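The repeated "Waited for ... due to client-side throttling" messages above come from client-go's default client-side rate limiter, which allows 5 requests per second with a burst of 10 when QPS and Burst are left at zero, as they are in the rest.Config dump earlier in this log. A minimal sketch of raising those limits is shown below; the QPS/Burst values are illustrative, not what minikube configures.

    // throttling_sketch.go: raise client-go's client-side rate limits.
    // Assumption: the QPS/Burst values below are illustrative only.
    package example

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // defaults to 5 req/s when left at 0
        cfg.Burst = 100 // defaults to 10 when left at 0
        return kubernetes.NewForConfig(cfg)
    }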
	I1028 17:26:36.265105   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:26:36.265162   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:26:36.280395   32020 api_server.go:72] duration metric: took 23.441229274s to wait for apiserver process to appear ...
	I1028 17:26:36.280422   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:26:36.280444   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:26:36.284951   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 17:26:36.285015   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:26:36.285023   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.285030   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.285034   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.285954   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:26:36.286036   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:26:36.286049   32020 api_server.go:131] duration metric: took 5.621129ms to wait for apiserver health ...
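The healthz probe and version request above can be reproduced through the clientset's underlying REST client; a minimal sketch follows, reusing a clientset built as in the earlier sketches (the function name is an assumption for illustration).

    // healthz_sketch.go: probe the apiserver /healthz endpoint.
    package example

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkHealthz(ctx context.Context, client *kubernetes.Clientset) error {
        // Hit the raw /healthz path on the apiserver, as the log does at
        // https://192.168.39.230:8443/healthz.
        body, err := client.Discovery().RESTClient().
            Get().
            AbsPath("/healthz").
            DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Printf("/healthz returned: %s\n", body) // the log above shows "ok"
        return nil
    }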
	I1028 17:26:36.286055   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:26:36.461480   32020 request.go:632] Waited for 175.36266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461560   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.461566   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.461573   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.461579   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.465870   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.471332   32020 system_pods.go:59] 17 kube-system pods found
	I1028 17:26:36.471364   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.471372   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.471378   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.471384   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.471389   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.471394   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.471398   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.471404   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.471410   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.471415   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.471420   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.471423   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.471427   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.471431   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.471439   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.471443   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.471447   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.471452   32020 system_pods.go:74] duration metric: took 185.392371ms to wait for pod list to return data ...
	I1028 17:26:36.471461   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:26:36.660798   32020 request.go:632] Waited for 189.265217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660858   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:26:36.660865   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.660876   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.660890   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.664250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:26:36.664492   32020 default_sa.go:45] found service account: "default"
	I1028 17:26:36.664512   32020 default_sa.go:55] duration metric: took 193.044588ms for default service account to be created ...
	I1028 17:26:36.664525   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:26:36.860686   32020 request.go:632] Waited for 196.070222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860774   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:26:36.860785   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:36.860796   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:36.860806   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:36.865017   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:36.869263   32020 system_pods.go:86] 17 kube-system pods found
	I1028 17:26:36.869283   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:26:36.869289   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:26:36.869294   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:26:36.869300   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:26:36.869305   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:26:36.869318   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:26:36.869324   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:26:36.869332   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:26:36.869341   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:26:36.869344   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:26:36.869348   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:26:36.869351   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:26:36.869355   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:26:36.869359   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:26:36.869362   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:26:36.869368   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:26:36.869371   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:26:36.869378   32020 system_pods.go:126] duration metric: took 204.847439ms to wait for k8s-apps to be running ...
	I1028 17:26:36.869387   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:26:36.869438   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:26:36.887558   32020 system_svc.go:56] duration metric: took 18.164041ms WaitForService to wait for kubelet
	I1028 17:26:36.887583   32020 kubeadm.go:582] duration metric: took 24.048418465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:26:36.887603   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:26:37.061041   32020 request.go:632] Waited for 173.358173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061125   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:26:37.061137   32020 round_trippers.go:469] Request Headers:
	I1028 17:26:37.061147   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:26:37.061157   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:26:37.065908   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:26:37.066717   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066739   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066750   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:26:37.066754   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:26:37.066758   32020 node_conditions.go:105] duration metric: took 179.146781ms to run NodePressure ...
	I1028 17:26:37.066780   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:26:37.066813   32020 start.go:255] writing updated cluster config ...
	I1028 17:26:37.068764   32020 out.go:201] 
	I1028 17:26:37.070024   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:26:37.070105   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.071682   32020 out.go:177] * Starting "ha-381619-m03" control-plane node in "ha-381619" cluster
	I1028 17:26:37.072951   32020 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:26:37.072974   32020 cache.go:56] Caching tarball of preloaded images
	I1028 17:26:37.073061   32020 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:26:37.073071   32020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:26:37.073157   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:26:37.073328   32020 start.go:360] acquireMachinesLock for ha-381619-m03: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:26:37.073367   32020 start.go:364] duration metric: took 22.448µs to acquireMachinesLock for "ha-381619-m03"
	I1028 17:26:37.073383   32020 start.go:93] Provisioning new machine with config: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:26:37.073468   32020 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 17:26:37.074992   32020 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 17:26:37.075063   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:26:37.075098   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:26:37.089635   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I1028 17:26:37.090045   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:26:37.090591   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:26:37.090617   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:26:37.090932   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:26:37.091131   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:26:37.091290   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:26:37.091438   32020 start.go:159] libmachine.API.Create for "ha-381619" (driver="kvm2")
	I1028 17:26:37.091470   32020 client.go:168] LocalClient.Create starting
	I1028 17:26:37.091506   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 17:26:37.091543   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091562   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091624   32020 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 17:26:37.091649   32020 main.go:141] libmachine: Decoding PEM data...
	I1028 17:26:37.091665   32020 main.go:141] libmachine: Parsing certificate...
	I1028 17:26:37.091691   32020 main.go:141] libmachine: Running pre-create checks...
	I1028 17:26:37.091702   32020 main.go:141] libmachine: (ha-381619-m03) Calling .PreCreateCheck
	I1028 17:26:37.091853   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:26:37.092216   32020 main.go:141] libmachine: Creating machine...
	I1028 17:26:37.092231   32020 main.go:141] libmachine: (ha-381619-m03) Calling .Create
	I1028 17:26:37.092346   32020 main.go:141] libmachine: (ha-381619-m03) Creating KVM machine...
	I1028 17:26:37.093689   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing default KVM network
	I1028 17:26:37.093825   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found existing private KVM network mk-ha-381619
	I1028 17:26:37.094015   32020 main.go:141] libmachine: (ha-381619-m03) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.094041   32020 main.go:141] libmachine: (ha-381619-m03) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:26:37.094128   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.093979   32807 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.094183   32020 main.go:141] libmachine: (ha-381619-m03) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 17:26:37.334476   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.334350   32807 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa...
	I1028 17:26:37.512343   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512238   32807 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk...
	I1028 17:26:37.512368   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing magic tar header
	I1028 17:26:37.512408   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Writing SSH key tar header
	I1028 17:26:37.512432   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:37.512349   32807 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 ...
	I1028 17:26:37.512450   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03
	I1028 17:26:37.512458   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 17:26:37.512478   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:26:37.512486   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 17:26:37.512517   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 17:26:37.512536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 17:26:37.512545   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03 (perms=drwx------)
	I1028 17:26:37.512553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Checking permissions on dir: /home
	I1028 17:26:37.512565   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 17:26:37.512581   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 17:26:37.512594   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 17:26:37.512609   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 17:26:37.512619   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Skipping /home - not owner
	I1028 17:26:37.512629   32020 main.go:141] libmachine: (ha-381619-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 17:26:37.512638   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:37.513512   32020 main.go:141] libmachine: (ha-381619-m03) define libvirt domain using xml: 
	I1028 17:26:37.513530   32020 main.go:141] libmachine: (ha-381619-m03) <domain type='kvm'>
	I1028 17:26:37.513546   32020 main.go:141] libmachine: (ha-381619-m03)   <name>ha-381619-m03</name>
	I1028 17:26:37.513552   32020 main.go:141] libmachine: (ha-381619-m03)   <memory unit='MiB'>2200</memory>
	I1028 17:26:37.513557   32020 main.go:141] libmachine: (ha-381619-m03)   <vcpu>2</vcpu>
	I1028 17:26:37.513561   32020 main.go:141] libmachine: (ha-381619-m03)   <features>
	I1028 17:26:37.513566   32020 main.go:141] libmachine: (ha-381619-m03)     <acpi/>
	I1028 17:26:37.513572   32020 main.go:141] libmachine: (ha-381619-m03)     <apic/>
	I1028 17:26:37.513577   32020 main.go:141] libmachine: (ha-381619-m03)     <pae/>
	I1028 17:26:37.513584   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513589   32020 main.go:141] libmachine: (ha-381619-m03)   </features>
	I1028 17:26:37.513595   32020 main.go:141] libmachine: (ha-381619-m03)   <cpu mode='host-passthrough'>
	I1028 17:26:37.513600   32020 main.go:141] libmachine: (ha-381619-m03)   
	I1028 17:26:37.513606   32020 main.go:141] libmachine: (ha-381619-m03)   </cpu>
	I1028 17:26:37.513611   32020 main.go:141] libmachine: (ha-381619-m03)   <os>
	I1028 17:26:37.513617   32020 main.go:141] libmachine: (ha-381619-m03)     <type>hvm</type>
	I1028 17:26:37.513622   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='cdrom'/>
	I1028 17:26:37.513630   32020 main.go:141] libmachine: (ha-381619-m03)     <boot dev='hd'/>
	I1028 17:26:37.513634   32020 main.go:141] libmachine: (ha-381619-m03)     <bootmenu enable='no'/>
	I1028 17:26:37.513638   32020 main.go:141] libmachine: (ha-381619-m03)   </os>
	I1028 17:26:37.513643   32020 main.go:141] libmachine: (ha-381619-m03)   <devices>
	I1028 17:26:37.513647   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='cdrom'>
	I1028 17:26:37.513655   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/boot2docker.iso'/>
	I1028 17:26:37.513660   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hdc' bus='scsi'/>
	I1028 17:26:37.513664   32020 main.go:141] libmachine: (ha-381619-m03)       <readonly/>
	I1028 17:26:37.513668   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513673   32020 main.go:141] libmachine: (ha-381619-m03)     <disk type='file' device='disk'>
	I1028 17:26:37.513679   32020 main.go:141] libmachine: (ha-381619-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 17:26:37.513689   32020 main.go:141] libmachine: (ha-381619-m03)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/ha-381619-m03.rawdisk'/>
	I1028 17:26:37.513697   32020 main.go:141] libmachine: (ha-381619-m03)       <target dev='hda' bus='virtio'/>
	I1028 17:26:37.513728   32020 main.go:141] libmachine: (ha-381619-m03)     </disk>
	I1028 17:26:37.513752   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513762   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='mk-ha-381619'/>
	I1028 17:26:37.513777   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513799   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513818   32020 main.go:141] libmachine: (ha-381619-m03)     <interface type='network'>
	I1028 17:26:37.513832   32020 main.go:141] libmachine: (ha-381619-m03)       <source network='default'/>
	I1028 17:26:37.513842   32020 main.go:141] libmachine: (ha-381619-m03)       <model type='virtio'/>
	I1028 17:26:37.513850   32020 main.go:141] libmachine: (ha-381619-m03)     </interface>
	I1028 17:26:37.513860   32020 main.go:141] libmachine: (ha-381619-m03)     <serial type='pty'>
	I1028 17:26:37.513868   32020 main.go:141] libmachine: (ha-381619-m03)       <target port='0'/>
	I1028 17:26:37.513877   32020 main.go:141] libmachine: (ha-381619-m03)     </serial>
	I1028 17:26:37.513888   32020 main.go:141] libmachine: (ha-381619-m03)     <console type='pty'>
	I1028 17:26:37.513899   32020 main.go:141] libmachine: (ha-381619-m03)       <target type='serial' port='0'/>
	I1028 17:26:37.513908   32020 main.go:141] libmachine: (ha-381619-m03)     </console>
	I1028 17:26:37.513919   32020 main.go:141] libmachine: (ha-381619-m03)     <rng model='virtio'>
	I1028 17:26:37.513932   32020 main.go:141] libmachine: (ha-381619-m03)       <backend model='random'>/dev/random</backend>
	I1028 17:26:37.513941   32020 main.go:141] libmachine: (ha-381619-m03)     </rng>
	I1028 17:26:37.513954   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513965   32020 main.go:141] libmachine: (ha-381619-m03)     
	I1028 17:26:37.513971   32020 main.go:141] libmachine: (ha-381619-m03)   </devices>
	I1028 17:26:37.513978   32020 main.go:141] libmachine: (ha-381619-m03) </domain>
	I1028 17:26:37.513992   32020 main.go:141] libmachine: (ha-381619-m03) 
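
Note: the block above is the libvirt domain XML the kvm2 driver defines for the new m03 node (2 vCPUs, 2200 MiB of memory, a boot2docker cdrom plus a raw virtio disk, and two virtio NICs on the mk-ha-381619 and default networks). As a rough illustration of what the "define libvirt domain using xml" / "Creating domain..." steps amount to, here is a minimal sketch using the libvirt Go bindings; the import path and the trimmed-down XML are assumptions for illustration, not minikube's actual driver code.

package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

// defineAndStart defines a persistent domain from XML and boots it, roughly
// mirroring the "define libvirt domain using xml" and "Creating domain..." log lines.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same KVMQemuURI as in the config dump
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start the freshly defined VM
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}

func main() {
	// Hypothetical, trimmed-down XML; the real driver emits the full definition shown above.
	xml := `<domain type='kvm'><name>ha-381619-m03</name><memory unit='MiB'>2200</memory><vcpu>2</vcpu>
<os><type>hvm</type><boot dev='hd'/></os><devices/></domain>`
	if err := defineAndStart(xml); err != nil {
		log.Fatal(err)
	}
}
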
	I1028 17:26:37.520796   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:6b:b8:f1 in network default
	I1028 17:26:37.521360   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring networks are active...
	I1028 17:26:37.521387   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:37.521985   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network default is active
	I1028 17:26:37.522251   32020 main.go:141] libmachine: (ha-381619-m03) Ensuring network mk-ha-381619 is active
	I1028 17:26:37.522562   32020 main.go:141] libmachine: (ha-381619-m03) Getting domain xml...
	I1028 17:26:37.523108   32020 main.go:141] libmachine: (ha-381619-m03) Creating domain...
	I1028 17:26:38.733507   32020 main.go:141] libmachine: (ha-381619-m03) Waiting to get IP...
	I1028 17:26:38.734445   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:38.734847   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:38.734874   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:38.734831   32807 retry.go:31] will retry after 277.511241ms: waiting for machine to come up
	I1028 17:26:39.014311   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.014705   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.014731   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.014657   32807 retry.go:31] will retry after 249.568431ms: waiting for machine to come up
	I1028 17:26:39.266003   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.266417   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.266438   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.266379   32807 retry.go:31] will retry after 332.313659ms: waiting for machine to come up
	I1028 17:26:39.599811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:39.600199   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:39.600224   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:39.600155   32807 retry.go:31] will retry after 498.320063ms: waiting for machine to come up
	I1028 17:26:40.099601   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.100068   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.100102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.100010   32807 retry.go:31] will retry after 620.508522ms: waiting for machine to come up
	I1028 17:26:40.721631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:40.722075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:40.722102   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:40.722032   32807 retry.go:31] will retry after 786.320854ms: waiting for machine to come up
	I1028 17:26:41.509664   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:41.510180   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:41.510208   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:41.510141   32807 retry.go:31] will retry after 1.021116287s: waiting for machine to come up
	I1028 17:26:42.532494   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:42.532913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:42.532943   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:42.532860   32807 retry.go:31] will retry after 1.335656065s: waiting for machine to come up
	I1028 17:26:43.870447   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:43.870913   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:43.870940   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:43.870865   32807 retry.go:31] will retry after 1.720265412s: waiting for machine to come up
	I1028 17:26:45.593694   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:45.594300   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:45.594326   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:45.594243   32807 retry.go:31] will retry after 1.629048478s: waiting for machine to come up
	I1028 17:26:47.224808   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:47.225182   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:47.225207   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:47.225159   32807 retry.go:31] will retry after 2.592881751s: waiting for machine to come up
	I1028 17:26:49.819232   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:49.819722   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:49.819742   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:49.819691   32807 retry.go:31] will retry after 2.406064511s: waiting for machine to come up
	I1028 17:26:52.227365   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:52.227723   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:52.227744   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:52.227706   32807 retry.go:31] will retry after 4.047640597s: waiting for machine to come up
	I1028 17:26:56.276662   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:26:56.277135   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find current IP address of domain ha-381619-m03 in network mk-ha-381619
	I1028 17:26:56.277158   32020 main.go:141] libmachine: (ha-381619-m03) DBG | I1028 17:26:56.277104   32807 retry.go:31] will retry after 4.243512083s: waiting for machine to come up
	I1028 17:27:00.523220   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.523671   32020 main.go:141] libmachine: (ha-381619-m03) Found IP for machine: 192.168.39.17
	I1028 17:27:00.523698   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has current primary IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
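
Note: the repeated "will retry after ..." lines above are the driver polling the mk-ha-381619 DHCP leases for the new MAC address until one appears, waiting progressively longer (with jitter) between attempts. A minimal sketch of that pattern follows; the helper names and delay values are illustrative, not the retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP stands in for querying the network's DHCP leases by MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease // placeholder: pretend the lease has not shown up yet
}

// waitForIP retries lookupLeaseIP with a growing, jittered backoff, similar in
// spirit to the "will retry after 277.511241ms ... 4.243512083s" lines in the log.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay = delay * 3 / 2 // grow the wait between polls
		}
	}
	return "", fmt.Errorf("timed out waiting for IP for MAC %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:d7:8c:62", 30*time.Second)
	fmt.Println(ip, err)
}
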
	I1028 17:27:00.523706   32020 main.go:141] libmachine: (ha-381619-m03) Reserving static IP address...
	I1028 17:27:00.524025   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "ha-381619-m03", mac: "52:54:00:d7:8c:62", ip: "192.168.39.17"} in network mk-ha-381619
	I1028 17:27:00.592781   32020 main.go:141] libmachine: (ha-381619-m03) Reserved static IP address: 192.168.39.17
	I1028 17:27:00.592808   32020 main.go:141] libmachine: (ha-381619-m03) Waiting for SSH to be available...
	I1028 17:27:00.592817   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:00.595728   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:00.595996   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619
	I1028 17:27:00.596032   32020 main.go:141] libmachine: (ha-381619-m03) DBG | unable to find defined IP address of network mk-ha-381619 interface with MAC address 52:54:00:d7:8c:62
	I1028 17:27:00.596173   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:00.596195   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:00.596242   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:00.596266   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:00.596292   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:00.599869   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 17:27:00.599886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 17:27:00.599893   32020 main.go:141] libmachine: (ha-381619-m03) DBG | command : exit 0
	I1028 17:27:00.599897   32020 main.go:141] libmachine: (ha-381619-m03) DBG | err     : exit status 255
	I1028 17:27:00.599912   32020 main.go:141] libmachine: (ha-381619-m03) DBG | output  : 
	I1028 17:27:03.600719   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Getting to WaitForSSH function...
	I1028 17:27:03.602993   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603307   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.603342   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.603475   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH client type: external
	I1028 17:27:03.603507   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa (-rw-------)
	I1028 17:27:03.603540   32020 main.go:141] libmachine: (ha-381619-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 17:27:03.603558   32020 main.go:141] libmachine: (ha-381619-m03) DBG | About to run SSH command:
	I1028 17:27:03.603573   32020 main.go:141] libmachine: (ha-381619-m03) DBG | exit 0
	I1028 17:27:03.732419   32020 main.go:141] libmachine: (ha-381619-m03) DBG | SSH cmd err, output: <nil>: 
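
Note: in the WaitForSSH exchange above, the driver probes the guest by running the system ssh client with the flags shown (StrictHostKeyChecking=no, the per-machine id_rsa key, "exit 0" as the remote command) until the probe exits cleanly; the first attempt returns exit status 255 because no IP had been matched to the MAC yet. A minimal sketch of such a probe via os/exec, with a hypothetical key path and user:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... docker@<ip> exit 0` with the same kinds of options the log
// shows and reports whether the command exited cleanly.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0",
	)
	return cmd.Run() == nil // a non-nil error covers both dial failures and exit status 255
}

func main() {
	ok := sshReady("192.168.39.17", "/path/to/machines/ha-381619-m03/id_rsa") // hypothetical key path
	fmt.Println("ssh ready:", ok)
}
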
	I1028 17:27:03.732661   32020 main.go:141] libmachine: (ha-381619-m03) KVM machine creation complete!
	I1028 17:27:03.732966   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:03.733514   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733669   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:03.733799   32020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 17:27:03.733816   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetState
	I1028 17:27:03.734895   32020 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 17:27:03.734910   32020 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 17:27:03.734928   32020 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 17:27:03.734939   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.737530   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.737905   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.737933   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.738103   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.738238   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738419   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.738528   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.738669   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.738865   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.738879   32020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 17:27:03.843630   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:27:03.843655   32020 main.go:141] libmachine: Detecting the provisioner...
	I1028 17:27:03.843666   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.846510   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.846865   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.846886   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.847091   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.847261   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847412   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.847510   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.847671   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.847870   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.847884   32020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 17:27:03.953430   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 17:27:03.953486   32020 main.go:141] libmachine: found compatible host: buildroot
	I1028 17:27:03.953497   32020 main.go:141] libmachine: Provisioning with buildroot...
	I1028 17:27:03.953508   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.953779   32020 buildroot.go:166] provisioning hostname "ha-381619-m03"
	I1028 17:27:03.953819   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:03.954012   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:03.956989   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957430   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:03.957456   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:03.957613   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:03.957773   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.957930   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:03.958072   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:03.958232   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:03.958456   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:03.958476   32020 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619-m03 && echo "ha-381619-m03" | sudo tee /etc/hostname
	I1028 17:27:04.082564   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619-m03
	
	I1028 17:27:04.082596   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.085190   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085543   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.085567   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.085806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.085952   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.086175   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.086298   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.086473   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.086494   32020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:27:04.201141   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:27:04.201171   32020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:27:04.201191   32020 buildroot.go:174] setting up certificates
	I1028 17:27:04.201204   32020 provision.go:84] configureAuth start
	I1028 17:27:04.201213   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetMachineName
	I1028 17:27:04.201449   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.204201   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204631   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.204661   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.204749   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.206751   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.207092   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.207247   32020 provision.go:143] copyHostCerts
	I1028 17:27:04.207276   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207314   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:27:04.207334   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:27:04.207429   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:27:04.207519   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207543   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:27:04.207552   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:27:04.207589   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:27:04.207646   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207670   32020 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:27:04.207679   32020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:27:04.207710   32020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:27:04.207772   32020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619-m03 san=[127.0.0.1 192.168.39.17 ha-381619-m03 localhost minikube]
	I1028 17:27:04.311071   32020 provision.go:177] copyRemoteCerts
	I1028 17:27:04.311121   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:27:04.311145   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.313577   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.313977   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.314019   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.314168   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.314347   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.314472   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.314623   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.403135   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:27:04.403211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:27:04.427834   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:27:04.427894   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 17:27:04.450833   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:27:04.450900   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:27:04.473452   32020 provision.go:87] duration metric: took 272.234677ms to configureAuth
	I1028 17:27:04.473476   32020 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:27:04.473653   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:04.473713   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.476526   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.476861   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.476881   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.477065   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.477235   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477353   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.477466   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.477631   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.477821   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.477837   32020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:27:04.708532   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:27:04.708562   32020 main.go:141] libmachine: Checking connection to Docker...
	I1028 17:27:04.708571   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetURL
	I1028 17:27:04.709704   32020 main.go:141] libmachine: (ha-381619-m03) DBG | Using libvirt version 6000000
	I1028 17:27:04.711553   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.711850   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.711877   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.712051   32020 main.go:141] libmachine: Docker is up and running!
	I1028 17:27:04.712065   32020 main.go:141] libmachine: Reticulating splines...
	I1028 17:27:04.712074   32020 client.go:171] duration metric: took 27.620592933s to LocalClient.Create
	I1028 17:27:04.712101   32020 start.go:167] duration metric: took 27.620663816s to libmachine.API.Create "ha-381619"
	I1028 17:27:04.712114   32020 start.go:293] postStartSetup for "ha-381619-m03" (driver="kvm2")
	I1028 17:27:04.712128   32020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:27:04.712149   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.712379   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:27:04.712408   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.714536   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.714835   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.714862   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.715020   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.715209   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.715341   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.715464   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.799357   32020 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:27:04.803701   32020 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:27:04.803723   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:27:04.803779   32020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:27:04.803846   32020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:27:04.803856   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:27:04.803932   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:27:04.813520   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:04.836571   32020 start.go:296] duration metric: took 124.443928ms for postStartSetup
	I1028 17:27:04.836615   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetConfigRaw
	I1028 17:27:04.837172   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.839735   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840084   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.840105   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.840305   32020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:27:04.840512   32020 start.go:128] duration metric: took 27.767033157s to createHost
	I1028 17:27:04.840535   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.842741   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843075   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.843096   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.843188   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.843354   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843499   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.843648   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.843814   32020 main.go:141] libmachine: Using SSH client type: native
	I1028 17:27:04.843957   32020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1028 17:27:04.843967   32020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:27:04.948925   32020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136424.929789330
	
	I1028 17:27:04.948945   32020 fix.go:216] guest clock: 1730136424.929789330
	I1028 17:27:04.948951   32020 fix.go:229] Guest: 2024-10-28 17:27:04.92978933 +0000 UTC Remote: 2024-10-28 17:27:04.840524406 +0000 UTC m=+152.171492636 (delta=89.264924ms)
	I1028 17:27:04.948966   32020 fix.go:200] guest clock delta is within tolerance: 89.264924ms
	I1028 17:27:04.948971   32020 start.go:83] releasing machines lock for "ha-381619-m03", held for 27.875595959s
	I1028 17:27:04.948986   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.949230   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:04.952087   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.952552   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.952580   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.954772   32020 out.go:177] * Found network options:
	I1028 17:27:04.956124   32020 out.go:177]   - NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:04.957329   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957826   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.957978   32020 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:27:04.958075   32020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:27:04.958124   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.958183   32020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:27:04.958205   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:27:04.960811   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961141   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961168   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961186   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961307   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961462   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.961599   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.961617   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:04.961637   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:04.961711   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:04.961806   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:27:04.961908   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:27:04.962057   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:27:04.962208   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:27:05.194026   32020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:27:05.201042   32020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:27:05.201105   32020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:27:05.217646   32020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 17:27:05.217662   32020 start.go:495] detecting cgroup driver to use...
	I1028 17:27:05.217711   32020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:27:05.236089   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:27:05.251712   32020 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:27:05.251757   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:27:05.266922   32020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:27:05.282192   32020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:27:05.400766   32020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:27:05.540458   32020 docker.go:233] disabling docker service ...
	I1028 17:27:05.540536   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:27:05.554384   32020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:27:05.566632   32020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:27:05.704365   32020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:27:05.814298   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
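
The sequence above stops and masks containerd, docker.socket/docker.service and cri-docker so that CRI-O is left as the only container runtime on the node. A quick manual check of that end state (a sketch using standard systemctl calls, not commands the log runs):

    for svc in crio containerd docker cri-docker; do
      printf '%-12s %s\n' "$svc" "$(systemctl is-active "$svc")"
    done
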
	I1028 17:27:05.832161   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:27:05.850391   32020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:27:05.850440   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.860158   32020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:27:05.860214   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.870182   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.880040   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.890188   32020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:27:05.901036   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.911295   32020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.928814   32020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:27:05.939099   32020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:27:05.949052   32020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 17:27:05.949107   32020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 17:27:05.961188   32020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:27:05.970308   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:06.082126   32020 ssh_runner.go:195] Run: sudo systemctl restart crio
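
Net effect of the crictl and CRI-O edits above, shown as the resulting file contents. This is a sketch of the relevant keys only: section placement follows the upstream CRI-O config layout, and the real flow patches the existing drop-in with sed rather than rewriting it.

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the sed edits)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The modprobe br_netfilter and ip_forward steps that follow are the usual kernel prerequisites for bridged pod traffic before the runtime restart.
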
	I1028 17:27:06.186312   32020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:27:06.186399   32020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:27:06.191449   32020 start.go:563] Will wait 60s for crictl version
	I1028 17:27:06.191503   32020 ssh_runner.go:195] Run: which crictl
	I1028 17:27:06.195251   32020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:27:06.231675   32020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:27:06.231743   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.263999   32020 ssh_runner.go:195] Run: crio --version
	I1028 17:27:06.295360   32020 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:27:06.296610   32020 out.go:177]   - env NO_PROXY=192.168.39.230
	I1028 17:27:06.297916   32020 out.go:177]   - env NO_PROXY=192.168.39.230,192.168.39.171
	I1028 17:27:06.299066   32020 main.go:141] libmachine: (ha-381619-m03) Calling .GetIP
	I1028 17:27:06.302357   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.302805   32020 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:27:06.302853   32020 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:27:06.303125   32020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:27:06.307684   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
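
The one-liner above is the idempotent /etc/hosts update: strip any existing line ending in the tab-separated hostname, append the fresh mapping, and copy the result back. The same pattern reappears later for control-plane.minikube.internal. Generalized sketch (NAME and IP are placeholders, not variables from the log):

    NAME=host.minikube.internal
    IP=192.168.39.1
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
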
	I1028 17:27:06.322487   32020 mustload.go:65] Loading cluster: ha-381619
	I1028 17:27:06.322674   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:06.322921   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.322954   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.337329   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I1028 17:27:06.337793   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.338350   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.338369   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.338643   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.338806   32020 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:27:06.340173   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:06.340491   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:06.340528   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:06.354028   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I1028 17:27:06.354441   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:06.354853   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:06.354871   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:06.355207   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:06.355398   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:06.355555   32020 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.17
	I1028 17:27:06.355568   32020 certs.go:194] generating shared ca certs ...
	I1028 17:27:06.355587   32020 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.355706   32020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:27:06.355743   32020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:27:06.355752   32020 certs.go:256] generating profile certs ...
	I1028 17:27:06.355818   32020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:27:06.355840   32020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131
	I1028 17:27:06.355854   32020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.17 192.168.39.254]
	I1028 17:27:06.615352   32020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 ...
	I1028 17:27:06.615384   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131: {Name:mk30b1e5a01615c193463ae31058813eb757a15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615571   32020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 ...
	I1028 17:27:06.615587   32020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131: {Name:mkc1142cb1e41a27aeb0597e6f743604179f8b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:27:06.615684   32020 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:27:06.615844   32020 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.ea12f131 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:27:06.616012   32020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:27:06.616031   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:27:06.616048   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:27:06.616067   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:27:06.616091   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:27:06.616107   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:27:06.616121   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:27:06.616138   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:27:06.632549   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:27:06.632628   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:27:06.632669   32020 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:27:06.632680   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:27:06.632702   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:27:06.632732   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:27:06.632764   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:27:06.632808   32020 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:27:06.632854   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:27:06.632879   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:06.632897   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:27:06.632955   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:06.635620   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.635992   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:06.636039   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:06.636203   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:06.636373   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:06.636547   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:06.636691   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:06.708743   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 17:27:06.714395   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 17:27:06.725274   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 17:27:06.729452   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 17:27:06.739682   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 17:27:06.743778   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 17:27:06.753533   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 17:27:06.757406   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1028 17:27:06.768515   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 17:27:06.772684   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 17:27:06.783594   32020 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 17:27:06.788182   32020 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 17:27:06.798917   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:27:06.824680   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:27:06.848168   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:27:06.870934   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:27:06.894622   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 17:27:06.916995   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 17:27:06.939854   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:27:06.962079   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:27:06.985176   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:27:07.007959   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:27:07.031196   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:27:07.054116   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 17:27:07.071809   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 17:27:07.087821   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 17:27:07.105114   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1028 17:27:07.121456   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 17:27:07.137929   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 17:27:07.153936   32020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 17:27:07.169928   32020 ssh_runner.go:195] Run: openssl version
	I1028 17:27:07.176125   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:27:07.186611   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191749   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.191791   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:27:07.197474   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:27:07.208145   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:27:07.219642   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224041   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.224081   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:27:07.229665   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:27:07.240477   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:27:07.251279   32020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255404   32020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.255446   32020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:27:07.260823   32020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
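
Each CA copied to /usr/share/ca-certificates above also gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates trusted CAs. The hash comes straight from the certificate; sketch of the idiom:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # .0 = first cert with this subject hash
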
	I1028 17:27:07.271234   32020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:27:07.275094   32020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 17:27:07.275142   32020 kubeadm.go:934] updating node {m03 192.168.39.17 8443 v1.31.2 crio true true} ...
	I1028 17:27:07.275277   32020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 17:27:07.275318   32020 kube-vip.go:115] generating kube-vip config ...
	I1028 17:27:07.275356   32020 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:27:07.290975   32020 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:27:07.291032   32020 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
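
kube-vip runs as a static pod on every control-plane node, takes leadership through the plndr-cp-lock lease, and advertises the HA virtual IP 192.168.39.254 on eth0 with load-balancing to port 8443 enabled. Quick manual checks that the VIP is live (standard tooling, not commands the log runs):

    ip -4 addr show dev eth0 | grep 192.168.39.254        # VIP bound on the current leader
    curl -ks https://192.168.39.254:8443/healthz; echo    # API server reachable through the VIP
    kubectl -n kube-system get lease plndr-cp-lock        # which node currently holds leadership
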
	I1028 17:27:07.291070   32020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.301885   32020 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 17:27:07.301930   32020 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 17:27:07.312754   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 17:27:07.312779   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312836   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 17:27:07.312864   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312756   32020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 17:27:07.312926   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 17:27:07.312927   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:07.317184   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 17:27:07.317211   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 17:27:07.352999   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 17:27:07.353042   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 17:27:07.353044   32020 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.353130   32020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 17:27:07.410351   32020 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 17:27:07.410406   32020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
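
The kubeadm, kubectl and kubelet binaries are served from the local cache at .minikube/cache/linux/amd64/v1.31.2 and pushed to /var/lib/minikube/binaries/v1.31.2 on the new node; the cache itself is populated from dl.k8s.io and verified against the published .sha256 checksums. Equivalent manual fetch for one binary, as a sketch using the URLs referenced above:

    V=v1.31.2
    curl -fLO "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubeadm"
    curl -fL  "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubeadm.sha256" -o kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check    # expect: kubeadm: OK
    sudo install -m 0755 kubeadm "/var/lib/minikube/binaries/${V}/kubeadm"
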
	I1028 17:27:08.136367   32020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 17:27:08.145689   32020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 17:27:08.162514   32020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:27:08.178802   32020 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:27:08.195002   32020 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:27:08.198953   32020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 17:27:08.210803   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:08.352163   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:27:08.377094   32020 host.go:66] Checking if "ha-381619" exists ...
	I1028 17:27:08.377585   32020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:27:08.377645   32020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:27:08.394262   32020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I1028 17:27:08.394687   32020 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:27:08.395242   32020 main.go:141] libmachine: Using API Version  1
	I1028 17:27:08.395276   32020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:27:08.395635   32020 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:27:08.395837   32020 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:27:08.396078   32020 start.go:317] joinCluster: &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:27:08.396215   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 17:27:08.396230   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:27:08.399082   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399537   32020 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:27:08.399566   32020 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:27:08.399713   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:27:08.399904   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:27:08.400043   32020 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:27:08.400171   32020 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:27:08.552541   32020 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:08.552592   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I1028 17:27:30.870343   32020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mq1yj0.88qkgi523axtbdw2 --discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-381619-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (22.317699091s)
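
The join command was generated on the primary at 17:27:08 with "kubeadm token create --print-join-command --ttl=0"; its --discovery-token-ca-cert-hash is simply the SHA-256 of the cluster CA's public key, so it can be re-derived on any control-plane node. Sketch of the standard procedure (the CA path is the one this cluster copies certs to, per the scp steps above, and the rsa step assumes an RSA CA key):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # matches the sha256:2c3dc27d... value in the join command above
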
	I1028 17:27:30.870408   32020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 17:27:31.352565   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-381619-m03 minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=ha-381619 minikube.k8s.io/primary=false
	I1028 17:27:31.535264   32020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-381619-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 17:27:31.653788   32020 start.go:319] duration metric: took 23.257712014s to joinCluster
	I1028 17:27:31.653906   32020 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 17:27:31.654293   32020 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:27:31.655305   32020 out.go:177] * Verifying Kubernetes components...
	I1028 17:27:31.656854   32020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:27:31.931462   32020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:27:32.007668   32020 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:27:32.008012   32020 kapi.go:59] client config for ha-381619: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 17:27:32.008099   32020 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.230:8443
	I1028 17:27:32.008418   32020 node_ready.go:35] waiting up to 6m0s for node "ha-381619-m03" to be "Ready" ...
	I1028 17:27:32.008555   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.008568   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.008580   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.008590   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.012013   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:32.509493   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:32.509514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:32.509522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:32.509526   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:32.512995   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:33.008792   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.008813   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.008823   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.008831   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.013277   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:33.509021   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:33.509043   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:33.509053   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:33.509059   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:33.512568   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.009494   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.009514   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.009522   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.009525   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.012872   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:34.013477   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
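
From here the log polls GET /api/v1/nodes/ha-381619-m03 roughly every 500ms until the node's Ready condition turns True, with a 6m ceiling. The same wait expressed with kubectl rather than minikube's internal client (sketch; the kubeconfig context name is assumed to match the profile):

    kubectl --context ha-381619 wait --for=condition=Ready node/ha-381619-m03 --timeout=6m
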
	I1028 17:27:34.508671   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:34.508698   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:34.508711   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:34.508717   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:34.511657   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.009518   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.009538   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.009546   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.009549   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.012353   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:35.509512   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:35.509539   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:35.509551   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:35.509564   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:35.513144   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.009477   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.009496   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.009503   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.009508   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.012424   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:36.509250   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:36.509279   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:36.509290   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:36.509295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:36.512794   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:36.513405   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:37.008636   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.008657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.008668   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.008676   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.011455   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:37.509093   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:37.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:37.509123   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:37.509127   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:37.512558   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.009185   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.009214   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.009222   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.009226   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.012314   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:38.508924   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:38.508943   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:38.508951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:38.508955   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:38.511947   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.008656   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.008679   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.008691   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.008698   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.011261   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:39.011779   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:39.509251   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:39.509272   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:39.509279   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:39.509283   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:39.512371   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:40.009266   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.009299   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.013354   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:40.509289   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:40.509307   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:40.509315   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:40.509320   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:40.512591   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:41.009123   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.009146   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.009163   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.014310   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:41.014943   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:41.509077   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:41.509115   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:41.509126   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:41.509134   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:41.512425   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.008587   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.008609   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.008621   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.008627   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.012270   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:42.509586   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:42.509607   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:42.509615   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:42.509621   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:42.512638   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:43.009220   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.009238   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.009248   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.009256   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.012180   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.508622   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:43.508646   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:43.508656   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:43.508660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:43.511470   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:43.512019   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:44.009130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.009150   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.009157   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.009161   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.012525   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:44.509423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:44.509446   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:44.509457   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:44.509462   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:44.513302   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.009198   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.009218   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.009225   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.009230   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.012566   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:45.508621   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:45.508641   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:45.508649   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:45.508652   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:45.511562   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:45.512081   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:46.008747   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.008770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.008778   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.008782   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.011847   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:46.509246   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:46.509269   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:46.509277   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:46.509281   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:46.512939   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:47.008680   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.008703   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.008713   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.008719   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.030138   32020 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1028 17:27:47.508630   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:47.508650   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:47.508657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:47.508663   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:47.514479   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:47.515054   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:48.008911   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.008931   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.008940   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.008944   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.012001   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:48.509098   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:48.509121   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:48.509132   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:48.509138   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:48.512351   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.008615   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.008635   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.008643   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.008647   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.011780   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:49.508700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:49.508723   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:49.508731   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:49.508735   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:49.511993   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.008627   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.008648   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.008657   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.008660   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.012285   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:50.012911   32020 node_ready.go:53] node "ha-381619-m03" has status "Ready":"False"
	I1028 17:27:50.509280   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:50.509301   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:50.509309   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:50.509321   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:50.512855   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.009269   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.009287   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.009295   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.009303   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.012097   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.509273   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.509293   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.509304   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.509309   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.512305   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.513072   32020 node_ready.go:49] node "ha-381619-m03" has status "Ready":"True"
	I1028 17:27:51.513099   32020 node_ready.go:38] duration metric: took 19.504662706s for node "ha-381619-m03" to be "Ready" ...
	I1028 17:27:51.513110   32020 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:51.513182   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:51.513193   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.513203   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.513209   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.518727   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:51.525983   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.526072   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6lp7c
	I1028 17:27:51.526088   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.526100   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.526111   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.531963   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:51.532739   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.532753   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.532761   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.532764   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.535083   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.535631   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.535649   32020 pod_ready.go:82] duration metric: took 9.646144ms for pod "coredns-7c65d6cfc9-6lp7c" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535657   32020 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.535700   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtmvl
	I1028 17:27:51.535707   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.535714   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.535721   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.538224   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.538964   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.538979   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.538986   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.538990   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.541964   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.542349   32020 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.542364   32020 pod_ready.go:82] duration metric: took 6.701109ms for pod "coredns-7c65d6cfc9-mtmvl" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542375   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.542424   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619
	I1028 17:27:51.542434   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.542441   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.542447   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.544839   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.545361   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:51.545376   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.545385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.545392   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.547384   32020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 17:27:51.547876   32020 pod_ready.go:93] pod "etcd-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.547890   32020 pod_ready.go:82] duration metric: took 5.50604ms for pod "etcd-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547898   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.547937   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m02
	I1028 17:27:51.547944   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.547951   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.547954   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.549977   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.550423   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:51.550435   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.550442   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.550445   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.552459   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:51.553082   32020 pod_ready.go:93] pod "etcd-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.553099   32020 pod_ready.go:82] duration metric: took 5.194272ms for pod "etcd-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.553110   32020 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.709397   32020 request.go:632] Waited for 156.217787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709446   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/etcd-ha-381619-m03
	I1028 17:27:51.709451   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.709458   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.709461   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.712548   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:51.909629   32020 request.go:632] Waited for 196.367534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909684   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:51.909689   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:51.909700   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:51.909708   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:51.918132   32020 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 17:27:51.918809   32020 pod_ready.go:93] pod "etcd-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:51.918828   32020 pod_ready.go:82] duration metric: took 365.711465ms for pod "etcd-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:51.918850   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.109303   32020 request.go:632] Waited for 190.370368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109365   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619
	I1028 17:27:52.109373   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.109383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.109388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.112392   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.309408   32020 request.go:632] Waited for 196.27481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309460   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:52.309464   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.309471   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.309475   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.312195   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:52.312752   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.312777   32020 pod_ready.go:82] duration metric: took 393.917667ms for pod "kube-apiserver-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.312791   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.509760   32020 request.go:632] Waited for 196.900981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509849   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m02
	I1028 17:27:52.509861   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.509872   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.509878   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.513709   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.709720   32020 request.go:632] Waited for 195.19818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709771   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:52.709777   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.709784   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.709789   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.712910   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:52.713496   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:52.713513   32020 pod_ready.go:82] duration metric: took 400.71419ms for pod "kube-apiserver-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.713525   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:52.910080   32020 request.go:632] Waited for 196.490754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910131   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-381619-m03
	I1028 17:27:52.910138   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:52.910148   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:52.910155   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:52.913570   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.109611   32020 request.go:632] Waited for 195.067242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109675   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:53.109680   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.109688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.109692   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.112419   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:53.113243   32020 pod_ready.go:93] pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.113258   32020 pod_ready.go:82] duration metric: took 399.726328ms for pod "kube-apiserver-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.113269   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.309322   32020 request.go:632] Waited for 195.985489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309373   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619
	I1028 17:27:53.309378   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.309385   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.309389   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.312514   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.509641   32020 request.go:632] Waited for 196.355986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509756   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:53.509770   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.509788   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.509809   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.513067   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.513631   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.513648   32020 pod_ready.go:82] duration metric: took 400.372385ms for pod "kube-controller-manager-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.513660   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.709756   32020 request.go:632] Waited for 196.030975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709821   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m02
	I1028 17:27:53.709829   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.709838   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.709847   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.713250   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.910289   32020 request.go:632] Waited for 196.241506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910347   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:53.910352   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:53.910360   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:53.910365   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:53.913501   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:53.914111   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:53.914128   32020 pod_ready.go:82] duration metric: took 400.460847ms for pod "kube-controller-manager-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:53.914138   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.110262   32020 request.go:632] Waited for 196.057341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110321   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-381619-m03
	I1028 17:27:54.110328   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.110338   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.110344   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.113686   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.309625   32020 request.go:632] Waited for 195.198525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309696   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.309704   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.309715   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.309724   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.312970   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.313530   32020 pod_ready.go:93] pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.313550   32020 pod_ready.go:82] duration metric: took 399.405564ms for pod "kube-controller-manager-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.313561   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.509582   32020 request.go:632] Waited for 195.958227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509651   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2z74r
	I1028 17:27:54.509657   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.509664   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.509669   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.513356   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.709469   32020 request.go:632] Waited for 195.28008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709541   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:54.709547   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.709555   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.709562   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.712778   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:54.713684   32020 pod_ready.go:93] pod "kube-proxy-2z74r" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:54.713706   32020 pod_ready.go:82] duration metric: took 400.138051ms for pod "kube-proxy-2z74r" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.713722   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:54.909768   32020 request.go:632] Waited for 195.979649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909859   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqdtj
	I1028 17:27:54.909871   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:54.909882   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:54.909893   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:54.912982   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.110064   32020 request.go:632] Waited for 196.359608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110130   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.110135   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.110142   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.110148   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.113297   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.113778   32020 pod_ready.go:93] pod "kube-proxy-mqdtj" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.113796   32020 pod_ready.go:82] duration metric: took 400.063804ms for pod "kube-proxy-mqdtj" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.113805   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.309960   32020 request.go:632] Waited for 196.087241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310011   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrfgq
	I1028 17:27:55.310017   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.310027   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.310040   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.313630   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.509848   32020 request.go:632] Waited for 195.356609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509902   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:55.509907   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.509917   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.509922   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.513283   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.513872   32020 pod_ready.go:93] pod "kube-proxy-nrfgq" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.513891   32020 pod_ready.go:82] duration metric: took 400.079859ms for pod "kube-proxy-nrfgq" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.513903   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.709489   32020 request.go:632] Waited for 195.521691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709543   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619
	I1028 17:27:55.709558   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.709582   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.709589   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.713346   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.910316   32020 request.go:632] Waited for 196.337736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910371   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619
	I1028 17:27:55.910375   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:55.910383   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:55.910388   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:55.913484   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:55.914099   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:55.914115   32020 pod_ready.go:82] duration metric: took 400.201992ms for pod "kube-scheduler-ha-381619" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:55.914124   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.110258   32020 request.go:632] Waited for 196.039546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110326   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m02
	I1028 17:27:56.110331   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.110337   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.110342   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.113332   32020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 17:27:56.310263   32020 request.go:632] Waited for 196.319737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310334   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m02
	I1028 17:27:56.310355   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.310370   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.310379   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.313786   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.314505   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.314532   32020 pod_ready.go:82] duration metric: took 400.399291ms for pod "kube-scheduler-ha-381619-m02" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.314546   32020 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.510327   32020 request.go:632] Waited for 195.699418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510378   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-381619-m03
	I1028 17:27:56.510383   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.510390   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.510394   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.513464   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.709328   32020 request.go:632] Waited for 195.274185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709385   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes/ha-381619-m03
	I1028 17:27:56.709391   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.709398   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.709403   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.712740   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:56.713420   32020 pod_ready.go:93] pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 17:27:56.713436   32020 pod_ready.go:82] duration metric: took 398.882403ms for pod "kube-scheduler-ha-381619-m03" in "kube-system" namespace to be "Ready" ...
	I1028 17:27:56.713446   32020 pod_ready.go:39] duration metric: took 5.200325366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 17:27:56.713469   32020 api_server.go:52] waiting for apiserver process to appear ...
	I1028 17:27:56.713519   32020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:27:56.729002   32020 api_server.go:72] duration metric: took 25.075050157s to wait for apiserver process to appear ...
	I1028 17:27:56.729025   32020 api_server.go:88] waiting for apiserver healthz status ...
	I1028 17:27:56.729051   32020 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 17:27:56.734141   32020 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 17:27:56.734212   32020 round_trippers.go:463] GET https://192.168.39.230:8443/version
	I1028 17:27:56.734223   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.734234   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.734242   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.735154   32020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 17:27:56.735212   32020 api_server.go:141] control plane version: v1.31.2
	I1028 17:27:56.735228   32020 api_server.go:131] duration metric: took 6.196303ms to wait for apiserver health ...
	I1028 17:27:56.735237   32020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 17:27:56.909657   32020 request.go:632] Waited for 174.332812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909707   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:56.909712   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:56.909720   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:56.909725   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:56.915545   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:56.922175   32020 system_pods.go:59] 24 kube-system pods found
	I1028 17:27:56.922215   32020 system_pods.go:61] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:56.922225   32020 system_pods.go:61] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:56.922230   32020 system_pods.go:61] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:56.922235   32020 system_pods.go:61] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:56.922240   32020 system_pods.go:61] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:56.922248   32020 system_pods.go:61] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:56.922253   32020 system_pods.go:61] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:56.922259   32020 system_pods.go:61] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:56.922267   32020 system_pods.go:61] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:56.922273   32020 system_pods.go:61] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:56.922281   32020 system_pods.go:61] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:56.922288   32020 system_pods.go:61] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:56.922294   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:56.922302   32020 system_pods.go:61] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:56.922308   32020 system_pods.go:61] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:56.922317   32020 system_pods.go:61] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:56.922327   32020 system_pods.go:61] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:56.922335   32020 system_pods.go:61] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:56.922341   32020 system_pods.go:61] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:56.922348   32020 system_pods.go:61] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:56.922352   32020 system_pods.go:61] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:56.922355   32020 system_pods.go:61] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:56.922361   32020 system_pods.go:61] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:56.922364   32020 system_pods.go:61] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:56.922369   32020 system_pods.go:74] duration metric: took 187.124012ms to wait for pod list to return data ...
	I1028 17:27:56.922378   32020 default_sa.go:34] waiting for default service account to be created ...
	I1028 17:27:57.109949   32020 request.go:632] Waited for 187.506133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110004   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/default/serviceaccounts
	I1028 17:27:57.110012   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.110022   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.110033   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.113502   32020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 17:27:57.113628   32020 default_sa.go:45] found service account: "default"
	I1028 17:27:57.113645   32020 default_sa.go:55] duration metric: took 191.260682ms for default service account to be created ...
	I1028 17:27:57.113656   32020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 17:27:57.309925   32020 request.go:632] Waited for 196.205305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310024   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/namespaces/kube-system/pods
	I1028 17:27:57.310036   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.310047   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.310053   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.315888   32020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 17:27:57.322856   32020 system_pods.go:86] 24 kube-system pods found
	I1028 17:27:57.322880   32020 system_pods.go:89] "coredns-7c65d6cfc9-6lp7c" [061190be-cf6d-4f7e-b201-92ae719bca38] Running
	I1028 17:27:57.322886   32020 system_pods.go:89] "coredns-7c65d6cfc9-mtmvl" [8d504fd6-a26b-43ec-b258-451c15a3e859] Running
	I1028 17:27:57.322890   32020 system_pods.go:89] "etcd-ha-381619" [f791ec3e-971a-4a7e-91cc-89c9076b0287] Running
	I1028 17:27:57.322893   32020 system_pods.go:89] "etcd-ha-381619-m02" [de41d7c1-70d4-4e14-8ca1-9591d33f09f2] Running
	I1028 17:27:57.322897   32020 system_pods.go:89] "etcd-ha-381619-m03" [f74b1d73-786b-4806-9608-24d397f0c764] Running
	I1028 17:27:57.322900   32020 system_pods.go:89] "kindnet-2ggdz" [2083bd85-a172-493e-ad5d-fbad874f5d86] Running
	I1028 17:27:57.322904   32020 system_pods.go:89] "kindnet-82dqn" [c4d9a56e-9b9a-41e4-8e98-d3be1576fcbf] Running
	I1028 17:27:57.322907   32020 system_pods.go:89] "kindnet-vj9vj" [cef92207-2f62-4d72-baf1-e9010ed565c1] Running
	I1028 17:27:57.322918   32020 system_pods.go:89] "kube-apiserver-ha-381619" [dda13107-a223-40e1-afd8-0f9f47434f1a] Running
	I1028 17:27:57.322927   32020 system_pods.go:89] "kube-apiserver-ha-381619-m02" [f92254e8-a4d5-4c58-a795-272f9d929848] Running
	I1028 17:27:57.322932   32020 system_pods.go:89] "kube-apiserver-ha-381619-m03" [497e1667-9545-4af5-9ad7-f569fcf5f7ff] Running
	I1028 17:27:57.322940   32020 system_pods.go:89] "kube-controller-manager-ha-381619" [b5fd62a5-c4fd-4247-a8d8-312740b64934] Running
	I1028 17:27:57.322946   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m02" [5a8aae2c-46ba-4d86-9890-64f055e1792c] Running
	I1028 17:27:57.322951   32020 system_pods.go:89] "kube-controller-manager-ha-381619-m03" [d2ac5d7a-6147-4f40-82c2-88084c01b3b7] Running
	I1028 17:27:57.322958   32020 system_pods.go:89] "kube-proxy-2z74r" [98756d8c-b3cf-4839-b28a-ae144afb1836] Running
	I1028 17:27:57.322966   32020 system_pods.go:89] "kube-proxy-mqdtj" [b146a3fd-b1a4-49f6-9711-96737c4a3757] Running
	I1028 17:27:57.322971   32020 system_pods.go:89] "kube-proxy-nrfgq" [23543f1a-4f95-4cbd-b084-0c30f8167b79] Running
	I1028 17:27:57.322978   32020 system_pods.go:89] "kube-scheduler-ha-381619" [0bf1190f-da89-49c2-91e7-8c57424e215e] Running
	I1028 17:27:57.322986   32020 system_pods.go:89] "kube-scheduler-ha-381619-m02" [ad72546f-b54d-4f32-8976-e0244872be00] Running
	I1028 17:27:57.322991   32020 system_pods.go:89] "kube-scheduler-ha-381619-m03" [0b970742-a09a-41e6-97b7-1e5ec97be097] Running
	I1028 17:27:57.322999   32020 system_pods.go:89] "kube-vip-ha-381619" [ab8f53e1-383c-4f92-9ebf-67d8fb69b47c] Running
	I1028 17:27:57.323006   32020 system_pods.go:89] "kube-vip-ha-381619-m02" [4acf4651-fe7c-4e27-a607-c906edc1352e] Running
	I1028 17:27:57.323011   32020 system_pods.go:89] "kube-vip-ha-381619-m03" [7bc6ac65-c33b-48a9-9f1c-30bbfaac21f2] Running
	I1028 17:27:57.323018   32020 system_pods.go:89] "storage-provisioner" [0456ff02-2c23-423a-8010-1556d1e6dfac] Running
	I1028 17:27:57.323027   32020 system_pods.go:126] duration metric: took 209.364489ms to wait for k8s-apps to be running ...
	I1028 17:27:57.323045   32020 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 17:27:57.323123   32020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:27:57.338248   32020 system_svc.go:56] duration metric: took 15.198158ms WaitForService to wait for kubelet
	I1028 17:27:57.338268   32020 kubeadm.go:582] duration metric: took 25.684324158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:27:57.338294   32020 node_conditions.go:102] verifying NodePressure condition ...
	I1028 17:27:57.509596   32020 request.go:632] Waited for 171.215252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509662   32020 round_trippers.go:463] GET https://192.168.39.230:8443/api/v1/nodes
	I1028 17:27:57.509677   32020 round_trippers.go:469] Request Headers:
	I1028 17:27:57.509688   32020 round_trippers.go:473]     Accept: application/json, */*
	I1028 17:27:57.509699   32020 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 17:27:57.514522   32020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 17:27:57.515701   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515733   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515769   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515779   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515785   32020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 17:27:57.515800   32020 node_conditions.go:123] node cpu capacity is 2
	I1028 17:27:57.515810   32020 node_conditions.go:105] duration metric: took 177.507704ms to run NodePressure ...
	I1028 17:27:57.515829   32020 start.go:241] waiting for startup goroutines ...
	I1028 17:27:57.515863   32020 start.go:255] writing updated cluster config ...
	I1028 17:27:57.516171   32020 ssh_runner.go:195] Run: rm -f paused
	I1028 17:27:57.567306   32020 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 17:27:57.569290   32020 out.go:177] * Done! kubectl is now configured to use "ha-381619" cluster and "default" namespace by default
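	For context on the wait loop recorded above: minikube repeatedly issues GET /api/v1/nodes/<name> (roughly every 500 ms) until the node's Ready condition reports True, then runs the same kind of per-pod readiness checks for each control-plane component. The sketch below illustrates that polling pattern with client-go; it is not minikube's own implementation, and the kubeconfig path and node name are placeholders taken from this log.

	```go
	// Minimal illustrative sketch (assumed names/paths, not minikube's code) of
	// polling a node object until its Ready condition is True, the pattern the
	// log above shows for "ha-381619-m03".
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; minikube builds its client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		// Poll about every 500 ms, matching the cadence visible in the log.
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-381619-m03", metav1.GetOptions{})
			if err == nil && nodeIsReady(node) {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node readiness")
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
	```

	The "Waited for ... due to client-side throttling" lines interleaved above come from client-go's default rate limiter, not from the API server's priority-and-fairness handling; they only delay the next request and do not indicate a failure.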
	
	
	==> CRI-O <==
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.642446142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136722642421468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4c89ca7-0246-4c8b-892e-0eaaba2f134a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.643670285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e801e3f-6af1-4829-aff4-c80253b0724f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.643740910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e801e3f-6af1-4829-aff4-c80253b0724f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.644004041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e801e3f-6af1-4829-aff4-c80253b0724f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.680631808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e13b48a-8aeb-4511-bed0-98c582fd9ef5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.680702668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e13b48a-8aeb-4511-bed0-98c582fd9ef5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.682514602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77e290a4-eb8e-4030-b6e3-96480181b886 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.682924206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136722682902398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77e290a4-eb8e-4030-b6e3-96480181b886 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.683362019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a910984e-4c50-4372-bedb-5a391dc76863 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.683436583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a910984e-4c50-4372-bedb-5a391dc76863 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.683644081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a910984e-4c50-4372-bedb-5a391dc76863 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.722375075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=541aa1af-aa5c-4c38-9b83-0f95ba8da03d name=/runtime.v1.RuntimeService/Version
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.722466962Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=541aa1af-aa5c-4c38-9b83-0f95ba8da03d name=/runtime.v1.RuntimeService/Version
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.724805373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59dee67e-2e65-4826-86f6-b78777a19a20 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.725556958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136722725533156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59dee67e-2e65-4826-86f6-b78777a19a20 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.726291019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=798ae5fc-12eb-4b7d-bb3d-95fc50cf0fd5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.726361058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=798ae5fc-12eb-4b7d-bb3d-95fc50cf0fd5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.726617292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=798ae5fc-12eb-4b7d-bb3d-95fc50cf0fd5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.768939218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f6f6897-a16e-4d26-80d9-bf2e3ba9f6d5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.769032159Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f6f6897-a16e-4d26-80d9-bf2e3ba9f6d5 name=/runtime.v1.RuntimeService/Version
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.769956829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70d93fbc-d0db-4264-9d2c-89e278dcb9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.770339872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136722770315308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70d93fbc-d0db-4264-9d2c-89e278dcb9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.770913892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24f426e5-69a8-4727-8778-30796a9ea572 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.770981832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24f426e5-69a8-4727-8778-30796a9ea572 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 17:32:02 ha-381619 crio[660]: time="2024-10-28 17:32:02.771186134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30,PodSandboxId:32dd7ef5c8db8b9b674edbc571047b578b0e2f3d71398eed906ebeb95eebe55f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331443543194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtmvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d504fd6-a26b-43ec-b258-451c15a3e859,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f,PodSandboxId:a8d9ef07a9de9b9002102c2d33493c307173ac523113ab6354d5601805475ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730136331442902841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6lp7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 061190be-cf6d-4f7e-b201-92ae719bca38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b25385ac6d6a90be28f312ef27698c14de0d8d8aebe666d5ebf7c1bbe4cf36,PodSandboxId:cdf8a7008daaa2b2cb55acc0731608ad7bf549065cd7410ccb86f7646f22cef8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1730136331351168571,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0456ff02-2c23-423a-8010-1556d1e6dfac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3,PodSandboxId:ec93f4cb498de7715cacd4611d155be4b972ec9cca650e29aa58dd841106b9ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:1730136
318816975031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vj9vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef92207-2f62-4d72-baf1-e9010ed565c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa,PodSandboxId:31e8db8e135614fa4a6262893bec8c2200023b9fd431ecc85c304837fabcf973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730136318522443741,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqdtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b146a3fd-b1a4-49f6-9711-96737c4a3757,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8820dc5a1a258a06f95e99bf102490574fb81a9f12c82932543371fdb565be12,PodSandboxId:0440b646716622ab7f026f90250a2167d5548e21afb218d51a8f2790ea1e5269,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730136311708281679,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec83820abe46635a1506eebb7b37687,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8,PodSandboxId:75b5ea16f2e6b491e92cbebea482d930ed9f0e42eca27f75438ee424bfc1f021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730136307941333251,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99aada2e47224c0ab170fe94d2522169,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9,PodSandboxId:2d476f176dee346550f51e572d9d2d0e57a615e57363921d43bf4ed40b493ee5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730136307934006040,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a158ca3c640657c77f7a0e4aa7b1e1a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37,PodSandboxId:8535275eaad56deb3059ead80933a145d7e5b9edc91be88a24d7748f47024988,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730136307942285274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-381619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662de4399b331ad11cfb03bbc1b4d764,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b,PodSandboxId:2c5f11da0112e9d0c7f4ff45799ec2a9992d5d69f716c99f23c70fc399a7f139,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730136307843854963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-381619,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b89a388aad392b7c49123c2b4319e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24f426e5-69a8-4727-8778-30796a9ea572 name=/runtime.v1.RuntimeService/ListContainers
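For context on the repeated Version / ImageFsInfo / ListContainers entries above: they are ordinary CRI gRPC calls served by CRI-O on its socket (unix:///var/run/crio/crio.sock, matching the kubeadm cri-socket annotation shown in the node description further down). The following is a minimal, hypothetical Go sketch of the same RuntimeService RPCs, not part of the test suite; it assumes the k8s.io/cri-api v1 client, the default CRI-O socket path, and root access on the node (e.g. via "minikube ssh").

// Hypothetical sketch: issue the same CRI RuntimeService calls that appear in
// the crio debug log above (Version and an unfiltered ListContainers).
// ImageFsInfo lives on the separate ImageService client and is omitted here.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" // assumed client library
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Socket path taken from the node's cri-socket annotation; typically
	// requires root to open.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC that produced the "&VersionResponse{...RuntimeName:cri-o...}" lines.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Unfiltered ListContainers, matching "No filters were applied" in the log.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}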
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb3c00b93a7e6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   32dd7ef5c8db8       coredns-7c65d6cfc9-mtmvl
	439a12fd4f2e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                    6 minutes ago       Running             coredns                   0                   a8d9ef07a9de9       coredns-7c65d6cfc9-6lp7c
	32b25385ac6d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    6 minutes ago       Running             storage-provisioner       0                   cdf8a7008daaa       storage-provisioner
	02eaa5b848022       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                    6 minutes ago       Running             kindnet-cni               0                   ec93f4cb498de       kindnet-vj9vj
	4c2af4b0e8f70       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                    6 minutes ago       Running             kube-proxy                0                   31e8db8e13561       kube-proxy-mqdtj
	8820dc5a1a258       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215   6 minutes ago       Running             kube-vip                  0                   0440b64671662       kube-vip-ha-381619
	a2a4ad9e37b9c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                    6 minutes ago       Running             kube-apiserver            0                   8535275eaad56       kube-apiserver-ha-381619
	c4311ab52a438       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                    6 minutes ago       Running             kube-controller-manager   0                   75b5ea16f2e6b       kube-controller-manager-ha-381619
	5d299a6ffacac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                    6 minutes ago       Running             etcd                      0                   2d476f176dee3       etcd-ha-381619
	8f6c077dbde89       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                    6 minutes ago       Running             kube-scheduler            0                   2c5f11da0112e       kube-scheduler-ha-381619
	
	
	==> coredns [439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f] <==
	[INFO] 10.244.2.2:53226 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001368106s
	[INFO] 10.244.2.2:36312 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118066s
	[INFO] 10.244.1.2:38518 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000323292s
	[INFO] 10.244.1.2:47890 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000118239s
	[INFO] 10.244.1.2:45070 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000130482s
	[INFO] 10.244.1.2:39687 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001925125s
	[INFO] 10.244.2.3:53812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151587s
	[INFO] 10.244.2.3:54592 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180193s
	[INFO] 10.244.2.3:46470 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138925s
	[INFO] 10.244.2.2:48981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776352s
	[INFO] 10.244.2.2:35249 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131241s
	[INFO] 10.244.2.2:53917 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177037s
	[INFO] 10.244.2.2:34049 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001120542s
	[INFO] 10.244.1.2:35278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111663s
	[INFO] 10.244.1.2:37962 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106563s
	[INFO] 10.244.1.2:40545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246646s
	[INFO] 10.244.1.2:40814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215904s
	[INFO] 10.244.2.3:49806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000229773s
	[INFO] 10.244.2.2:44763 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117588s
	[INFO] 10.244.2.3:48756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125652s
	[INFO] 10.244.2.3:41328 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177165s
	[INFO] 10.244.2.3:35650 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137462s
	[INFO] 10.244.2.2:60478 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163829s
	[INFO] 10.244.2.2:51252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106643s
	[INFO] 10.244.1.2:56942 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137828s
	
	
	==> coredns [fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30] <==
	[INFO] 10.244.2.3:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131477s
	[INFO] 10.244.2.2:46692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196624s
	[INFO] 10.244.2.2:38402 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226272s
	[INFO] 10.244.2.2:34845 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153045s
	[INFO] 10.244.2.2:49870 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121016s
	[INFO] 10.244.1.2:51535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001893779s
	[INFO] 10.244.1.2:36412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109955s
	[INFO] 10.244.1.2:53434 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000734s
	[INFO] 10.244.1.2:38007 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101464s
	[INFO] 10.244.2.3:39546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.2.3:49299 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158392s
	[INFO] 10.244.2.3:42607 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102312s
	[INFO] 10.244.2.2:36855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150344s
	[INFO] 10.244.2.2:46374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00016867s
	[INFO] 10.244.2.2:37275 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112218s
	[INFO] 10.244.1.2:41523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017259s
	[INFO] 10.244.1.2:43696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000347465s
	[INFO] 10.244.1.2:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161099s
	[INFO] 10.244.1.2:59192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118287s
	[INFO] 10.244.2.3:42470 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243243s
	[INFO] 10.244.2.2:35932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020307s
	[INFO] 10.244.2.2:39597 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184178s
	[INFO] 10.244.1.2:43973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139891s
	[INFO] 10.244.1.2:41644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171411s
	[INFO] 10.244.1.2:47984 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086921s
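The coredns entries above are the server-side view of in-cluster lookups for names such as kubernetes.default.svc.cluster.local and host.minikube.internal. Below is a small, hypothetical Go sketch that would generate similar query lines; it assumes it runs inside a pod of this cluster and that the DNS Service IP is 10.96.0.10 (the address implied by the 10.0.96.10.in-addr.arpa PTR queries above), which may differ in other clusters.

// Minimal sketch, assuming an in-cluster pod and a cluster DNS Service at
// 10.96.0.10:53. Each LookupHost call below corresponds to the kind of
// "A IN kubernetes.default.svc.cluster.local." lines logged by coredns.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	resolver := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			// Send every query straight to the assumed cluster DNS service over UDP.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	for _, name := range []string{
		"kubernetes.default.svc.cluster.local",
		"host.minikube.internal",
	} {
		addrs, err := resolver.LookupHost(ctx, name)
		if err != nil {
			log.Printf("lookup %s: %v", name, err)
			continue
		}
		fmt.Println(name, "->", addrs)
	}
}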
	
	
	==> describe nodes <==
	Name:               ha-381619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T17_25_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:25:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:32:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:30:31 +0000   Mon, 28 Oct 2024 17:25:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-381619
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ff487634ba146ebb8929cc99763c422
	  System UUID:                1ff48763-4ba1-46eb-b892-9cc99763c422
	  Boot ID:                    ce5a7712-d088-475f-80ec-c8b7dee605bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6lp7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m45s
	  kube-system                 coredns-7c65d6cfc9-mtmvl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m45s
	  kube-system                 etcd-ha-381619                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m50s
	  kube-system                 kindnet-vj9vj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-apiserver-ha-381619             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-controller-manager-ha-381619    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-proxy-mqdtj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-scheduler-ha-381619             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-vip-ha-381619                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m44s                  kube-proxy       
	  Normal  Starting                 6m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m56s (x7 over 6m56s)  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m56s (x8 over 6m56s)  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m56s (x8 over 6m56s)  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m49s                  kubelet          Node ha-381619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m49s                  kubelet          Node ha-381619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m49s                  kubelet          Node ha-381619 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m46s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  NodeReady                6m33s                  kubelet          Node ha-381619 status is now: NodeReady
	  Normal  RegisteredNode           5m46s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-381619 event: Registered Node ha-381619 in Controller
	
	
	Name:               ha-381619-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_26_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:26:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:29:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 17:28:12 +0000   Mon, 28 Oct 2024 17:30:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-381619-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe038bc140e34a24bfa4fe915bd6a83f
	  System UUID:                fe038bc1-40e3-4a24-bfa4-fe915bd6a83f
	  Boot ID:                    2395418c-cd94-4285-8c38-7cd31a1df92a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dxwnw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 etcd-ha-381619-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m53s
	  kube-system                 kindnet-2ggdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m53s
	  kube-system                 kube-apiserver-ha-381619-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-ha-381619-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-proxy-nrfgq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-scheduler-ha-381619-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-vip-ha-381619-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m49s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m54s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m54s)  kubelet          Node ha-381619-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m54s)  kubelet          Node ha-381619-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m51s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  RegisteredNode           5m46s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeReady                5m31s                  kubelet          Node ha-381619-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-381619-m02 event: Registered Node ha-381619-m02 in Controller
	  Normal  NodeNotReady             117s                   node-controller  Node ha-381619-m02 status is now: NodeNotReady
	
	
	Name:               ha-381619-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_27_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:27:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:28:29 +0000   Mon, 28 Oct 2024 17:27:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-381619-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f056208103704b70bfb827d2e01fcbd6
	  System UUID:                f0562081-0370-4b70-bfb8-27d2e01fcbd6
	  Boot ID:                    3c41c87b-23bb-455f-8665-1ca87b736f8b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-26cg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  default                     busybox-7dff88458-9n6bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 etcd-ha-381619-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m33s
	  kube-system                 kindnet-82dqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m35s
	  kube-system                 kube-apiserver-ha-381619-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-ha-381619-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-2z74r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-ha-381619-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-vip-ha-381619-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m35s (x8 over 4m35s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s (x8 over 4m35s)  kubelet          Node ha-381619-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s (x7 over 4m35s)  kubelet          Node ha-381619-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-381619-m03 event: Registered Node ha-381619-m03 in Controller
	
	
	Name:               ha-381619-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-381619-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=ha-381619
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T17_28_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 17:28:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-381619-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 17:31:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 17:29:12 +0000   Mon, 28 Oct 2024 17:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-381619-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c794eda5b61f4b51846d119496d6611f
	  System UUID:                c794eda5-b61f-4b51-846d-119496d6611f
	  Boot ID:                    d054e196-c392-4e7e-a1b3-e459ee7974d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzqx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m21s
	  kube-system                 kube-proxy-7dwhb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet          Node ha-381619-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet          Node ha-381619-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-381619-m04 event: Registered Node ha-381619-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-381619-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 17:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050172] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854623] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.491096] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.570925] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.341236] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.065909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059908] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.181734] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.112783] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.252616] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct28 17:25] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.759910] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.058388] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.418126] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.806365] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +4.131777] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.537990] kauditd_printk_skb: 41 callbacks suppressed
	[  +9.942403] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9] <==
	{"level":"warn","ts":"2024-10-28T17:32:03.036521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.044336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.053641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.058643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.065172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.065851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.069326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.071838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.075849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.077159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.082575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.087939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.090708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.093544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.099244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.114049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.141368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.148161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.156101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.163368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.176189Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.181158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.181951Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.188275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T17:32:03.217433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f4acae94ef986412","from":"f4acae94ef986412","remote-peer-id":"af936484d1d2a2d6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:32:03 up 7 min,  0 users,  load average: 0.06, 0.20, 0.12
	Linux ha-381619 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3] <==
	I1028 17:31:30.296308       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:40.295696       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:40.295776       1 main.go:300] handling current node
	I1028 17:31:40.295795       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:40.295804       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:31:40.296160       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:40.296192       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:40.296331       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:40.296358       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:50.300065       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:31:50.300101       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:31:50.300348       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:31:50.300359       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:31:50.300489       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:31:50.300496       1 main.go:300] handling current node
	I1028 17:31:50.300514       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:31:50.300518       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:32:00.296594       1 main.go:296] Handling node with IPs: map[192.168.39.171:{}]
	I1028 17:32:00.296710       1 main.go:323] Node ha-381619-m02 has CIDR [10.244.1.0/24] 
	I1028 17:32:00.297447       1 main.go:296] Handling node with IPs: map[192.168.39.17:{}]
	I1028 17:32:00.297489       1 main.go:323] Node ha-381619-m03 has CIDR [10.244.2.0/24] 
	I1028 17:32:00.297746       1 main.go:296] Handling node with IPs: map[192.168.39.224:{}]
	I1028 17:32:00.297770       1 main.go:323] Node ha-381619-m04 has CIDR [10.244.3.0/24] 
	I1028 17:32:00.298067       1 main.go:296] Handling node with IPs: map[192.168.39.230:{}]
	I1028 17:32:00.298106       1 main.go:300] handling current node
	
	
	==> kube-apiserver [a2a4ad9e37b9c9d203bd9852110266d7e93d3658c54e927260997745b94b5c37] <==
	W1028 17:25:12.245785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I1028 17:25:12.247133       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 17:25:12.256065       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 17:25:12.326331       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 17:25:13.936309       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 17:25:13.952773       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 17:25:13.968009       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 17:25:17.830466       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1028 17:25:18.077531       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1028 17:28:07.019815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41404: use of closed network connection
	E1028 17:28:07.205390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41420: use of closed network connection
	E1028 17:28:07.386536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41448: use of closed network connection
	E1028 17:28:07.599536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41470: use of closed network connection
	E1028 17:28:07.775264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41490: use of closed network connection
	E1028 17:28:07.949242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41512: use of closed network connection
	E1028 17:28:08.118133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41522: use of closed network connection
	E1028 17:28:08.303400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41550: use of closed network connection
	E1028 17:28:08.475723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41556: use of closed network connection
	E1028 17:28:08.762057       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47594: use of closed network connection
	E1028 17:28:08.944378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47612: use of closed network connection
	E1028 17:28:09.126803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47636: use of closed network connection
	E1028 17:28:09.297149       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47658: use of closed network connection
	E1028 17:28:09.471140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47674: use of closed network connection
	E1028 17:28:09.647026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47704: use of closed network connection
	W1028 17:29:32.257515       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.230]
	
	
	==> kube-controller-manager [c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8] <==
	I1028 17:28:42.026011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.036622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.060198       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.297173       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-381619-m04"
	I1028 17:28:42.386481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.396569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.781672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.951532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:42.966339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:46.926084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:47.034432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:28:52.333791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:04.446682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:29:04.463505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:06.946376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:29:12.658007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m04"
	I1028 17:30:06.972035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:06.972340       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-381619-m04"
	I1028 17:30:06.993167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:07.005350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.940759ms"
	I1028 17:30:07.006727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.8µs"
	I1028 17:30:07.346197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:12.214622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619-m02"
	I1028 17:30:31.329575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-381619"
	
	
	==> kube-proxy [4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 17:25:18.698349       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 17:25:18.711046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E1028 17:25:18.711157       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 17:25:18.745433       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 17:25:18.745462       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 17:25:18.745490       1 server_linux.go:169] "Using iptables Proxier"
	I1028 17:25:18.747834       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 17:25:18.748160       1 server.go:483] "Version info" version="v1.31.2"
	I1028 17:25:18.748312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 17:25:18.749989       1 config.go:199] "Starting service config controller"
	I1028 17:25:18.750071       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 17:25:18.750117       1 config.go:105] "Starting endpoint slice config controller"
	I1028 17:25:18.750134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 17:25:18.750598       1 config.go:328] "Starting node config controller"
	I1028 17:25:18.751738       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 17:25:18.851210       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 17:25:18.851309       1 shared_informer.go:320] Caches are synced for service config
	I1028 17:25:18.852898       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b] <==
	E1028 17:25:11.721217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.842707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 17:25:11.842776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.845287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 17:25:11.848083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 17:25:11.886433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 17:25:11.886602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1028 17:25:14.002937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 17:27:58.460072       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="568dfe45-5437-4cfd-8d20-2fa1e33d8999" pod="default/busybox-7dff88458-9n6bb" assumedNode="ha-381619-m03" currentNode="ha-381619-m02"
	E1028 17:27:58.471238       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m02"
	E1028 17:27:58.471407       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 568dfe45-5437-4cfd-8d20-2fa1e33d8999(default/busybox-7dff88458-9n6bb) was assumed on ha-381619-m02 but assigned to ha-381619-m03" pod="default/busybox-7dff88458-9n6bb"
	E1028 17:27:58.471445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9n6bb\": pod busybox-7dff88458-9n6bb is already assigned to node \"ha-381619-m03\"" pod="default/busybox-7dff88458-9n6bb"
	I1028 17:27:58.471522       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9n6bb" node="ha-381619-m03"
	E1028 17:28:42.093317       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.093832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9291bc3b-2fa3-4a6c-99d3-7bb2a5721b25(kube-system/kindnet-fzqx2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fzqx2"
	E1028 17:28:42.094010       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fzqx2\": pod kindnet-fzqx2 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-fzqx2"
	I1028 17:28:42.094225       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fzqx2" node="ha-381619-m04"
	E1028 17:28:42.149948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.154547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15a36ca9-85be-4b6a-8e4a-31495d13a0c1(kube-system/kube-proxy-7dwhb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7dwhb"
	E1028 17:28:42.156945       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7dwhb\": pod kube-proxy-7dwhb is already assigned to node \"ha-381619-m04\"" pod="kube-system/kube-proxy-7dwhb"
	I1028 17:28:42.157115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7dwhb" node="ha-381619-m04"
	E1028 17:28:42.164640       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	E1028 17:28:42.164715       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 61afb85d-818e-40a2-ad14-87c5f4541d0e(kube-system/kindnet-p6x26) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p6x26"
	E1028 17:28:42.164729       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p6x26\": pod kindnet-p6x26 is already assigned to node \"ha-381619-m04\"" pod="kube-system/kindnet-p6x26"
	I1028 17:28:42.164745       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p6x26" node="ha-381619-m04"
	
	
	==> kubelet <==
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979164    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:23 ha-381619 kubelet[1301]: E1028 17:30:23.979443    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136623978831910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.980958    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:33 ha-381619 kubelet[1301]: E1028 17:30:33.982957    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136633980571352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988254    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:43 ha-381619 kubelet[1301]: E1028 17:30:43.988294    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136643987939382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989574    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:30:53 ha-381619 kubelet[1301]: E1028 17:30:53.989617    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136653989366289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996610    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:03 ha-381619 kubelet[1301]: E1028 17:31:03.996710    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136663993737167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.872137    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 17:31:13 ha-381619 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 17:31:13 ha-381619 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997852    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:13 ha-381619 kubelet[1301]: E1028 17:31:13.997963    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136673997611266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:23.999904    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:24 ha-381619 kubelet[1301]: E1028 17:31:24.000328    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136683999493753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001784    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:34 ha-381619 kubelet[1301]: E1028 17:31:34.001829    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136694001248517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003002    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:44 ha-381619 kubelet[1301]: E1028 17:31:44.003044    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136704002684813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:54 ha-381619 kubelet[1301]: E1028 17:31:54.004348    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136714004119051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 17:31:54 ha-381619 kubelet[1301]: E1028 17:31:54.004369    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730136714004119051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137418,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-381619 -n ha-381619
helpers_test.go:261: (dbg) Run:  kubectl --context ha-381619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)
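The kubelet entries above show two recurring messages: the eviction manager cannot obtain image filesystem stats from CRI-O ("missing image stats" from eviction_manager.go), and the iptables canary cannot create the KUBE-KUBELET-CANARY chain because the ip6tables `nat' table is missing on the guest kernel. The eviction-manager pair repeats every ten seconds throughout the log, so it reads as steady background noise rather than something specific to this failure. A minimal sketch for checking both from the node, assuming the ha-381619 profile is still running and that crictl is available inside the guest (it normally is on the minikube ISO):

    # Query the CRI-O image filesystem stats the eviction manager is asking for
    minikube ssh -p ha-381619 -- sudo crictl imagefsinfo

    # Check whether the ip6tables nat table (ip6table_nat module) exists on the guest
    minikube ssh -p ha-381619 -- 'lsmod | grep ip6table_nat; sudo ip6tables -t nat -L -n'

If the second command reproduces the "Table does not exist (do you need to insmod?)" message from inside the guest, the ISO kernel simply lacks the ip6table_nat module and the canary error is expected on this image.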

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (268.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-381619 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-381619 -v=7 --alsologtostderr
E1028 17:33:38.395378   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-381619 -v=7 --alsologtostderr: exit status 82 (2m1.834191884s)

                                                
                                                
-- stdout --
	* Stopping node "ha-381619-m04"  ...
	* Stopping node "ha-381619-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:32:04.244406   37306 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:32:04.244524   37306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:32:04.244533   37306 out.go:358] Setting ErrFile to fd 2...
	I1028 17:32:04.244538   37306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:32:04.244711   37306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:32:04.244906   37306 out.go:352] Setting JSON to false
	I1028 17:32:04.245004   37306 mustload.go:65] Loading cluster: ha-381619
	I1028 17:32:04.245378   37306 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:32:04.245462   37306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:32:04.245628   37306 mustload.go:65] Loading cluster: ha-381619
	I1028 17:32:04.245750   37306 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:32:04.245777   37306 stop.go:39] StopHost: ha-381619-m04
	I1028 17:32:04.246156   37306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:32:04.246212   37306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:32:04.261026   37306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1028 17:32:04.261572   37306 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:32:04.262185   37306 main.go:141] libmachine: Using API Version  1
	I1028 17:32:04.262208   37306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:32:04.262513   37306 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:32:04.264918   37306 out.go:177] * Stopping node "ha-381619-m04"  ...
	I1028 17:32:04.266106   37306 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 17:32:04.266138   37306 main.go:141] libmachine: (ha-381619-m04) Calling .DriverName
	I1028 17:32:04.266360   37306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 17:32:04.266390   37306 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHHostname
	I1028 17:32:04.269197   37306 main.go:141] libmachine: (ha-381619-m04) DBG | domain ha-381619-m04 has defined MAC address 52:54:00:6b:0d:06 in network mk-ha-381619
	I1028 17:32:04.269591   37306 main.go:141] libmachine: (ha-381619-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0d:06", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:28:25 +0000 UTC Type:0 Mac:52:54:00:6b:0d:06 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-381619-m04 Clientid:01:52:54:00:6b:0d:06}
	I1028 17:32:04.269620   37306 main.go:141] libmachine: (ha-381619-m04) DBG | domain ha-381619-m04 has defined IP address 192.168.39.224 and MAC address 52:54:00:6b:0d:06 in network mk-ha-381619
	I1028 17:32:04.269767   37306 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHPort
	I1028 17:32:04.269929   37306 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHKeyPath
	I1028 17:32:04.270087   37306 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHUsername
	I1028 17:32:04.270219   37306 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m04/id_rsa Username:docker}
	I1028 17:32:04.360882   37306 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 17:32:04.414142   37306 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 17:32:04.467402   37306 main.go:141] libmachine: Stopping "ha-381619-m04"...
	I1028 17:32:04.467425   37306 main.go:141] libmachine: (ha-381619-m04) Calling .GetState
	I1028 17:32:04.468737   37306 main.go:141] libmachine: (ha-381619-m04) Calling .Stop
	I1028 17:32:04.472188   37306 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 0/120
	I1028 17:32:05.620098   37306 main.go:141] libmachine: (ha-381619-m04) Calling .GetState
	I1028 17:32:05.621300   37306 main.go:141] libmachine: Machine "ha-381619-m04" was stopped.
	I1028 17:32:05.621320   37306 stop.go:75] duration metric: took 1.355216989s to stop
	I1028 17:32:05.621350   37306 stop.go:39] StopHost: ha-381619-m03
	I1028 17:32:05.621649   37306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:32:05.621687   37306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:32:05.635548   37306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I1028 17:32:05.635945   37306 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:32:05.636437   37306 main.go:141] libmachine: Using API Version  1
	I1028 17:32:05.636457   37306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:32:05.636780   37306 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:32:05.638570   37306 out.go:177] * Stopping node "ha-381619-m03"  ...
	I1028 17:32:05.639726   37306 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 17:32:05.639757   37306 main.go:141] libmachine: (ha-381619-m03) Calling .DriverName
	I1028 17:32:05.639961   37306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 17:32:05.639986   37306 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHHostname
	I1028 17:32:05.642665   37306 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:32:05.643109   37306 main.go:141] libmachine: (ha-381619-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:8c:62", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:26:51 +0000 UTC Type:0 Mac:52:54:00:d7:8c:62 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-381619-m03 Clientid:01:52:54:00:d7:8c:62}
	I1028 17:32:05.643139   37306 main.go:141] libmachine: (ha-381619-m03) DBG | domain ha-381619-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:d7:8c:62 in network mk-ha-381619
	I1028 17:32:05.643274   37306 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHPort
	I1028 17:32:05.643418   37306 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHKeyPath
	I1028 17:32:05.643552   37306 main.go:141] libmachine: (ha-381619-m03) Calling .GetSSHUsername
	I1028 17:32:05.643680   37306 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m03/id_rsa Username:docker}
	I1028 17:32:05.734513   37306 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 17:32:05.789124   37306 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 17:32:05.845491   37306 main.go:141] libmachine: Stopping "ha-381619-m03"...
	I1028 17:32:05.845514   37306 main.go:141] libmachine: (ha-381619-m03) Calling .GetState
	I1028 17:32:05.846852   37306 main.go:141] libmachine: (ha-381619-m03) Calling .Stop
	I1028 17:32:05.850149   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 0/120
	I1028 17:32:06.851258   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 1/120
	I1028 17:32:07.853055   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 2/120
	I1028 17:32:08.854425   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 3/120
	I1028 17:32:09.855887   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 4/120
	I1028 17:32:10.857999   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 5/120
	I1028 17:32:11.860331   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 6/120
	I1028 17:32:12.861733   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 7/120
	I1028 17:32:13.863259   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 8/120
	I1028 17:32:14.864457   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 9/120
	I1028 17:32:15.866457   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 10/120
	I1028 17:32:16.867862   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 11/120
	I1028 17:32:17.869189   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 12/120
	I1028 17:32:18.870684   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 13/120
	I1028 17:32:19.871978   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 14/120
	I1028 17:32:20.874207   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 15/120
	I1028 17:32:21.875502   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 16/120
	I1028 17:32:22.876928   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 17/120
	I1028 17:32:23.878508   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 18/120
	I1028 17:32:24.879731   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 19/120
	I1028 17:32:25.881585   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 20/120
	I1028 17:32:26.883126   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 21/120
	I1028 17:32:27.884458   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 22/120
	I1028 17:32:28.885854   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 23/120
	I1028 17:32:29.887084   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 24/120
	I1028 17:32:30.888839   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 25/120
	I1028 17:32:31.891151   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 26/120
	I1028 17:32:32.892514   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 27/120
	I1028 17:32:33.894059   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 28/120
	I1028 17:32:34.895292   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 29/120
	I1028 17:32:35.896995   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 30/120
	I1028 17:32:36.898897   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 31/120
	I1028 17:32:37.900067   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 32/120
	I1028 17:32:38.901628   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 33/120
	I1028 17:32:39.902784   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 34/120
	I1028 17:32:40.904366   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 35/120
	I1028 17:32:41.906318   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 36/120
	I1028 17:32:42.907449   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 37/120
	I1028 17:32:43.908811   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 38/120
	I1028 17:32:44.910766   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 39/120
	I1028 17:32:45.912490   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 40/120
	I1028 17:32:46.913818   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 41/120
	I1028 17:32:47.915109   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 42/120
	I1028 17:32:48.916334   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 43/120
	I1028 17:32:49.917689   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 44/120
	I1028 17:32:50.919427   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 45/120
	I1028 17:32:51.920572   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 46/120
	I1028 17:32:52.922269   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 47/120
	I1028 17:32:53.923943   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 48/120
	I1028 17:32:54.925504   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 49/120
	I1028 17:32:55.927602   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 50/120
	I1028 17:32:56.928940   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 51/120
	I1028 17:32:57.930453   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 52/120
	I1028 17:32:58.931891   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 53/120
	I1028 17:32:59.933126   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 54/120
	I1028 17:33:00.934937   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 55/120
	I1028 17:33:01.936101   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 56/120
	I1028 17:33:02.937406   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 57/120
	I1028 17:33:03.938799   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 58/120
	I1028 17:33:04.940080   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 59/120
	I1028 17:33:05.941842   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 60/120
	I1028 17:33:06.943153   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 61/120
	I1028 17:33:07.945156   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 62/120
	I1028 17:33:08.946420   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 63/120
	I1028 17:33:09.948338   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 64/120
	I1028 17:33:10.949943   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 65/120
	I1028 17:33:11.951191   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 66/120
	I1028 17:33:12.952431   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 67/120
	I1028 17:33:13.953747   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 68/120
	I1028 17:33:14.954939   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 69/120
	I1028 17:33:15.956424   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 70/120
	I1028 17:33:16.957792   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 71/120
	I1028 17:33:17.959103   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 72/120
	I1028 17:33:18.960436   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 73/120
	I1028 17:33:19.961614   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 74/120
	I1028 17:33:20.963279   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 75/120
	I1028 17:33:21.964624   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 76/120
	I1028 17:33:22.967032   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 77/120
	I1028 17:33:23.968265   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 78/120
	I1028 17:33:24.969561   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 79/120
	I1028 17:33:25.971133   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 80/120
	I1028 17:33:26.972495   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 81/120
	I1028 17:33:27.973816   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 82/120
	I1028 17:33:28.975202   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 83/120
	I1028 17:33:29.976546   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 84/120
	I1028 17:33:30.977855   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 85/120
	I1028 17:33:31.978971   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 86/120
	I1028 17:33:32.980155   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 87/120
	I1028 17:33:33.981295   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 88/120
	I1028 17:33:34.982609   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 89/120
	I1028 17:33:35.984326   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 90/120
	I1028 17:33:36.986386   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 91/120
	I1028 17:33:37.987532   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 92/120
	I1028 17:33:38.989231   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 93/120
	I1028 17:33:39.990477   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 94/120
	I1028 17:33:40.991912   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 95/120
	I1028 17:33:41.993210   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 96/120
	I1028 17:33:42.994385   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 97/120
	I1028 17:33:43.995725   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 98/120
	I1028 17:33:44.997105   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 99/120
	I1028 17:33:45.998766   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 100/120
	I1028 17:33:47.000034   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 101/120
	I1028 17:33:48.001247   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 102/120
	I1028 17:33:49.002965   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 103/120
	I1028 17:33:50.004201   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 104/120
	I1028 17:33:51.005934   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 105/120
	I1028 17:33:52.007562   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 106/120
	I1028 17:33:53.008841   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 107/120
	I1028 17:33:54.010897   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 108/120
	I1028 17:33:55.012961   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 109/120
	I1028 17:33:56.014677   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 110/120
	I1028 17:33:57.016108   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 111/120
	I1028 17:33:58.017776   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 112/120
	I1028 17:33:59.019050   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 113/120
	I1028 17:34:00.020382   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 114/120
	I1028 17:34:01.022053   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 115/120
	I1028 17:34:02.023523   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 116/120
	I1028 17:34:03.025257   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 117/120
	I1028 17:34:04.026767   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 118/120
	I1028 17:34:05.028126   37306 main.go:141] libmachine: (ha-381619-m03) Waiting for machine to stop 119/120
	I1028 17:34:06.028646   37306 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 17:34:06.028701   37306 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 17:34:06.030753   37306 out.go:201] 
	W1028 17:34:06.032000   37306 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 17:34:06.032017   37306 out.go:270] * 
	* 
	W1028 17:34:06.034397   37306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 17:34:06.035770   37306 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-381619 -v=7 --alsologtostderr" : exit status 82
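The stop above gave ha-381619-m03 the full 120 one-second polls (`Waiting for machine to stop 0/120` through `119/120`, roughly two minutes) and the domain never left the "Running" state, so minikube exited with GUEST_STOP_TIMEOUT (exit status 82). When reproducing this locally with the kvm2 driver, the stuck guest can be inspected and powered off directly through libvirt; a sketch assuming the host has the virsh client installed and uses the qemu:///system URI the driver reports later in this log:

    # List the domains the kvm2 driver created for this profile
    sudo virsh -c qemu:///system list --all

    # Inspect the node that stayed in the "Running" state
    sudo virsh -c qemu:///system dominfo ha-381619-m03

    # Ask the guest to shut down via ACPI; fall back to a hard power-off if it ignores the request
    sudo virsh -c qemu:///system shutdown ha-381619-m03
    sudo virsh -c qemu:///system destroy ha-381619-m03

`virsh destroy` is the libvirt equivalent of pulling the power, so it is a debugging aid for a wedged guest rather than a substitute for `minikube stop` (which, as the log above shows, first backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup).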
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-381619 --wait=true -v=7 --alsologtostderr
E1028 17:34:06.098917   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:35:33.436462   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-381619 --wait=true -v=7 --alsologtostderr: (2m23.944708934s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-381619
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-381619 -n ha-381619
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 logs -n 25: (2.473500572s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m04 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp testdata/cp-test.txt                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m04_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03:/home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m03 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-381619 node stop m02 -v=7                                                    | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-381619 node start m02 -v=7                                                   | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:31 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-381619 -v=7                                                          | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:32 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-381619 -v=7                                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:32 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-381619 --wait=true -v=7                                                   | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:34 UTC | 28 Oct 24 17:36 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-381619                                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:36 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:34:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:34:06.081446   37774 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:34:06.081794   37774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:34:06.081849   37774 out.go:358] Setting ErrFile to fd 2...
	I1028 17:34:06.081867   37774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:34:06.082313   37774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:34:06.083179   37774 out.go:352] Setting JSON to false
	I1028 17:34:06.084069   37774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4589,"bootTime":1730132257,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:34:06.084170   37774 start.go:139] virtualization: kvm guest
	I1028 17:34:06.086109   37774 out.go:177] * [ha-381619] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:34:06.087679   37774 notify.go:220] Checking for updates...
	I1028 17:34:06.087695   37774 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:34:06.088908   37774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:34:06.090095   37774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:34:06.091357   37774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:34:06.092669   37774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:34:06.093728   37774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:34:06.095178   37774 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:34:06.095285   37774 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:34:06.095688   37774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:34:06.095725   37774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:34:06.111722   37774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I1028 17:34:06.112173   37774 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:34:06.112845   37774 main.go:141] libmachine: Using API Version  1
	I1028 17:34:06.112883   37774 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:34:06.113307   37774 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:34:06.113510   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:06.146951   37774 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 17:34:06.148167   37774 start.go:297] selected driver: kvm2
	I1028 17:34:06.148181   37774 start.go:901] validating driver "kvm2" against &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:34:06.148298   37774 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:34:06.148628   37774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:34:06.148689   37774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:34:06.162653   37774 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:34:06.163427   37774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:34:06.163461   37774 cni.go:84] Creating CNI manager for ""
	I1028 17:34:06.163514   37774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 17:34:06.163570   37774 start.go:340] cluster config:
	{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:34:06.163703   37774 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:34:06.165272   37774 out.go:177] * Starting "ha-381619" primary control-plane node in "ha-381619" cluster
	I1028 17:34:06.166512   37774 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:34:06.166559   37774 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:34:06.166571   37774 cache.go:56] Caching tarball of preloaded images
	I1028 17:34:06.166630   37774 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:34:06.166640   37774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:34:06.166744   37774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:34:06.166912   37774 start.go:360] acquireMachinesLock for ha-381619: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:34:06.166949   37774 start.go:364] duration metric: took 19.2µs to acquireMachinesLock for "ha-381619"
	I1028 17:34:06.166962   37774 start.go:96] Skipping create...Using existing machine configuration
	I1028 17:34:06.166967   37774 fix.go:54] fixHost starting: 
	I1028 17:34:06.167202   37774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:34:06.167228   37774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:34:06.180368   37774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I1028 17:34:06.180762   37774 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:34:06.181229   37774 main.go:141] libmachine: Using API Version  1
	I1028 17:34:06.181243   37774 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:34:06.181551   37774 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:34:06.181734   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:06.181841   37774 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:34:06.183196   37774 fix.go:112] recreateIfNeeded on ha-381619: state=Running err=<nil>
	W1028 17:34:06.183215   37774 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 17:34:06.184878   37774 out.go:177] * Updating the running kvm2 "ha-381619" VM ...
	I1028 17:34:06.186039   37774 machine.go:93] provisionDockerMachine start ...
	I1028 17:34:06.186065   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:06.186218   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.188658   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.189099   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.189124   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.189242   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.189412   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.189547   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.189644   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.189754   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.189915   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.189923   37774 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 17:34:06.297544   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:34:06.297566   37774 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:34:06.297808   37774 buildroot.go:166] provisioning hostname "ha-381619"
	I1028 17:34:06.297833   37774 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:34:06.298017   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.300611   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.300973   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.301008   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.301187   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.301365   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.301534   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.301714   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.301875   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.302081   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.302098   37774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619 && echo "ha-381619" | sudo tee /etc/hostname
	I1028 17:34:06.417209   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:34:06.417247   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.420052   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.420426   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.420449   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.420610   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.420765   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.420955   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.421089   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.421234   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.421440   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.421459   37774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:34:06.526705   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
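
The script above only edits /etc/hosts when the hostname entry is missing: an existing 127.0.1.1 line is rewritten in place, otherwise one is appended, so re-running it against an already-provisioned guest produces no output, as seen here. A quick manual check of the result, reusing the SSH key and guest address recorded elsewhere in this log (a sketch, not part of the test):

    # verify the hostname and the /etc/hosts entry on the guest (key and IP taken from the log)
    ssh -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa \
        docker@192.168.39.230 'hostname; grep ha-381619 /etc/hosts'
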
	I1028 17:34:06.526730   37774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:34:06.526769   37774 buildroot.go:174] setting up certificates
	I1028 17:34:06.526785   37774 provision.go:84] configureAuth start
	I1028 17:34:06.526800   37774 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:34:06.527017   37774 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:34:06.529755   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.530100   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.530129   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.530296   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.532429   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.532793   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.532815   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.532943   37774 provision.go:143] copyHostCerts
	I1028 17:34:06.532975   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:34:06.533031   37774 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:34:06.533071   37774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:34:06.533159   37774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:34:06.533245   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:34:06.533273   37774 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:34:06.533281   37774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:34:06.533318   37774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:34:06.533371   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:34:06.533395   37774 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:34:06.533404   37774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:34:06.533435   37774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:34:06.533508   37774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619 san=[127.0.0.1 192.168.39.230 ha-381619 localhost minikube]
	I1028 17:34:06.790443   37774 provision.go:177] copyRemoteCerts
	I1028 17:34:06.790492   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:34:06.790513   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.792989   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.793340   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.793371   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.793555   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.793743   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.793897   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.794037   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:06.874991   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:34:06.875068   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:34:06.899939   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:34:06.900007   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 17:34:06.925082   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:34:06.925134   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:34:06.951736   37774 provision.go:87] duration metric: took 424.938946ms to configureAuth
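
configureAuth took about 425 ms above to push the CA plus a freshly generated server certificate and key into /etc/docker on the guest (the local and remote paths are the ones listed in the auth options at 17:34:06.526730). One way to confirm the remote copy matches the local one is to compare fingerprints (a sketch):

    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa
    # the two SHA-256 fingerprints should be identical
    openssl x509 -noout -fingerprint -sha256 \
        -in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem
    ssh -i "$KEY" docker@192.168.39.230 \
        'sudo openssl x509 -noout -fingerprint -sha256 -in /etc/docker/server.pem'
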
	I1028 17:34:06.951776   37774 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:34:06.952002   37774 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:34:06.952084   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.954553   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.954864   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.954892   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.955053   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.955252   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.955412   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.955514   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.955615   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.955811   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.955838   37774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:34:12.541953   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:34:12.541981   37774 machine.go:96] duration metric: took 6.355927371s to provisionDockerMachine
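
Most of the 6.36 s attributed to provisionDockerMachine above is the "systemctl restart crio" bundled into the sysconfig write at 17:34:06.955: the drop-in hands CRI-O an --insecure-registry flag for the 10.96.0.0/12 service CIDR and is presumably referenced by the crio unit as an environment file on the guest image. To inspect the file and confirm the runtime came back (a sketch):

    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa
    ssh -i "$KEY" docker@192.168.39.230 \
        'sudo cat /etc/sysconfig/crio.minikube; sudo systemctl is-active crio'
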
	I1028 17:34:12.541994   37774 start.go:293] postStartSetup for "ha-381619" (driver="kvm2")
	I1028 17:34:12.542007   37774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:34:12.542044   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.542484   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:34:12.542515   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.545152   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.545571   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.545599   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.545779   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.545952   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.546086   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.546225   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:12.627004   37774 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:34:12.631066   37774 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:34:12.631092   37774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:34:12.631161   37774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:34:12.631266   37774 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:34:12.631280   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:34:12.631403   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:34:12.640805   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:34:12.663783   37774 start.go:296] duration metric: took 121.775784ms for postStartSetup
	I1028 17:34:12.663819   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.664061   37774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1028 17:34:12.664083   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.666556   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.666886   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.666913   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.667030   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.667192   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.667343   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.667456   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	W1028 17:34:12.746859   37774 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1028 17:34:12.746885   37774 fix.go:56] duration metric: took 6.579917404s for fixHost
	I1028 17:34:12.746907   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.749219   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.749530   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.749554   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.749701   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.749871   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.749991   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.750131   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.750255   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:12.750435   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:12.750445   37774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:34:12.853006   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136852.809902149
	
	I1028 17:34:12.853046   37774 fix.go:216] guest clock: 1730136852.809902149
	I1028 17:34:12.853057   37774 fix.go:229] Guest: 2024-10-28 17:34:12.809902149 +0000 UTC Remote: 2024-10-28 17:34:12.746893174 +0000 UTC m=+6.700949872 (delta=63.008975ms)
	I1028 17:34:12.853087   37774 fix.go:200] guest clock delta is within tolerance: 63.008975ms
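
The clock check above runs "date +%s.%N" on the guest over SSH and compares it with the host's wall clock; judging by the message, minikube only intervenes when the delta falls outside its tolerance, and the 63 ms difference here is accepted. A rough manual version of the same comparison (a sketch, not the tool's exact logic):

    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.230 'date +%s.%N')
    host=$(date +%s.%N)
    # a small positive or negative delta is normal; large drift would warrant resyncing
    echo "guest-host delta: $(echo "$guest - $host" | bc) s"
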
	I1028 17:34:12.853095   37774 start.go:83] releasing machines lock for "ha-381619", held for 6.686136886s
	I1028 17:34:12.853120   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.853347   37774 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:34:12.855659   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.856087   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.856116   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.856250   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.856791   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.856972   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.857056   37774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:34:12.857103   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.857130   37774 ssh_runner.go:195] Run: cat /version.json
	I1028 17:34:12.857148   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.859421   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.859665   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.859816   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.859842   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.859974   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.860116   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.860122   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.860155   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.860260   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.860264   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.860386   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:12.860441   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.860547   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.860666   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:12.961793   37774 ssh_runner.go:195] Run: systemctl --version
	I1028 17:34:12.967487   37774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:34:13.121093   37774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:34:13.126876   37774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:34:13.126937   37774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:34:13.135833   37774 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 17:34:13.135852   37774 start.go:495] detecting cgroup driver to use...
	I1028 17:34:13.135910   37774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:34:13.151508   37774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:34:13.164573   37774 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:34:13.164612   37774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:34:13.177244   37774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:34:13.189820   37774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:34:13.325059   37774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:34:13.461542   37774 docker.go:233] disabling docker service ...
	I1028 17:34:13.461612   37774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:34:13.476956   37774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:34:13.489988   37774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:34:13.622617   37774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:34:13.757488   37774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:34:13.771459   37774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:34:13.791274   37774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:34:13.791344   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.801454   37774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:34:13.801514   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.811397   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.821235   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.831380   37774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:34:13.841461   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.851568   37774 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.861716   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.872128   37774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:34:13.881325   37774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:34:13.890341   37774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:34:14.036863   37774 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:34:14.231725   37774 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:34:14.231779   37774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:34:14.237005   37774 start.go:563] Will wait 60s for crictl version
	I1028 17:34:14.237038   37774 ssh_runner.go:195] Run: which crictl
	I1028 17:34:14.240982   37774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:34:14.279184   37774 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
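
The block starting at 17:34:13 rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0 in default_sysctls), writes /etc/crictl.yaml, enables IPv4 forwarding, then restarts CRI-O and waits for its socket; the crictl output above confirms CRI-O 1.29.1 is serving v1 of the CRI API again. One way to double-check that the sed edits landed (a sketch; "crio config" prints the merged runtime configuration, as the test itself runs a few lines further down):

    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa
    ssh -i "$KEY" docker@192.168.39.230 \
        "sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'"
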
	I1028 17:34:14.279242   37774 ssh_runner.go:195] Run: crio --version
	I1028 17:34:14.309098   37774 ssh_runner.go:195] Run: crio --version
	I1028 17:34:14.348740   37774 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:34:14.350029   37774 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:34:14.352430   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:14.352800   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:14.352819   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:14.353007   37774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:34:14.357785   37774 kubeadm.go:883] updating cluster {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:34:14.357929   37774 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:34:14.357967   37774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:34:14.399010   37774 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:34:14.399028   37774 crio.go:433] Images already preloaded, skipping extraction
	I1028 17:34:14.399081   37774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:34:14.431546   37774 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:34:14.431562   37774 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:34:14.431571   37774 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.2 crio true true} ...
	I1028 17:34:14.431664   37774 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
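
In the kubelet drop-in above, the empty "ExecStart=" line followed by a second "ExecStart=" is the usual systemd idiom for replacing, rather than appending to, the unit's start command; this snippet is presumably what gets copied to the guest as the 309-byte 10-kubeadm.conf in the scp step below. To see the effective unit once it is installed (a sketch):

    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa
    # shows the base kubelet.service plus the 10-kubeadm.conf drop-in
    ssh -i "$KEY" docker@192.168.39.230 'systemctl cat kubelet'
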
	I1028 17:34:14.431720   37774 ssh_runner.go:195] Run: crio config
	I1028 17:34:14.481452   37774 cni.go:84] Creating CNI manager for ""
	I1028 17:34:14.481472   37774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 17:34:14.481483   37774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:34:14.481517   37774 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-381619 NodeName:ha-381619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:34:14.481659   37774 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-381619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
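
The generated kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single document, which is later copied to the guest as /var/tmp/minikube/kubeadm.yaml.new. A config of this shape can be sanity-checked offline; newer kubeadm releases ship a "config validate" subcommand (a sketch, assuming the minikube-staged kubeadm binary is present under /var/lib/minikube/binaries/v1.31.2):

    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa
    ssh -i "$KEY" docker@192.168.39.230 \
        'sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new'
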
	
	I1028 17:34:14.481681   37774 kube-vip.go:115] generating kube-vip config ...
	I1028 17:34:14.481734   37774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:34:14.493141   37774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:34:14.493243   37774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
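
The manifest above runs kube-vip v0.8.4 as a static pod on this control-plane node: leader election (vip_leaderelection / plndr-cp-lock) picks one holder for the ARP-advertised VIP 192.168.39.254 on eth0, and lb_enable / lb_port add load-balancing of API-server traffic on 8443, which is the APIServerHAVIP the cluster config points at. A quick liveness check against the VIP from the host (a sketch; even a 401/403 reply proves the VIP is held and routed to an apiserver):

    curl -sk https://192.168.39.254:8443/version
    ip neigh | grep 192.168.39.254    # the MAC should belong to the current kube-vip leader
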
	I1028 17:34:14.493288   37774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:34:14.503177   37774 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:34:14.503265   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 17:34:14.512392   37774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 17:34:14.528555   37774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:34:14.544374   37774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 17:34:14.560264   37774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:34:14.577007   37774 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:34:14.581972   37774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:34:14.714503   37774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:34:14.729411   37774 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.230
	I1028 17:34:14.729430   37774 certs.go:194] generating shared ca certs ...
	I1028 17:34:14.729444   37774 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:34:14.729603   37774 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:34:14.729659   37774 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:34:14.729672   37774 certs.go:256] generating profile certs ...
	I1028 17:34:14.729783   37774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:34:14.729815   37774 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd
	I1028 17:34:14.729835   37774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.17 192.168.39.254]
	I1028 17:34:14.782067   37774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd ...
	I1028 17:34:14.782093   37774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd: {Name:mkb247ab2c4d11778d7be3979ba86e665737952f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:34:14.782267   37774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd ...
	I1028 17:34:14.782286   37774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd: {Name:mkdd41c5146a9e432f4d3ba9dadb2655d7828245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:34:14.782378   37774 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:34:14.782583   37774 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:34:14.782735   37774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:34:14.782752   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:34:14.782769   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:34:14.782788   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:34:14.782805   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:34:14.782821   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:34:14.782837   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:34:14.782851   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:34:14.782871   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:34:14.782934   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:34:14.782965   37774 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:34:14.782978   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:34:14.783015   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:34:14.783042   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:34:14.783073   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:34:14.783126   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:34:14.783159   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:14.783177   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:34:14.783198   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:34:14.783767   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:34:14.810324   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:34:14.834198   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:34:14.858547   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:34:14.881927   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 17:34:14.904418   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:34:14.927501   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:34:14.950808   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:34:15.045068   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:34:15.143693   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:34:15.280696   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:34:15.486969   37774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:34:15.649604   37774 ssh_runner.go:195] Run: openssl version
	I1028 17:34:15.701599   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:34:15.772288   37774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:15.788243   37774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:15.788303   37774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:15.805157   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:34:15.824755   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:34:15.849874   37774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:34:15.855169   37774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:34:15.855207   37774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:34:15.879573   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:34:15.912134   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:34:15.949460   37774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:34:16.001913   37774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:34:16.001975   37774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:34:16.050584   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
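
The pattern repeated three times above is how an OpenSSL-style trust directory is populated: each certificate is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as <subject-hash>.0, where the hash is exactly what the preceding "openssl x509 -hash -noout" call printed (b5213941 for minikubeCA.pem, 51391683 for 20680.pem, 3ec20f2e for 206802.pem). Listing the resulting links (a sketch):

    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa
    ssh -i "$KEY" docker@192.168.39.230 \
        'ls -l /etc/ssl/certs/b5213941.0 /etc/ssl/certs/51391683.0 /etc/ssl/certs/3ec20f2e.0'
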
	I1028 17:34:16.104909   37774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:34:16.139249   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 17:34:16.174256   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 17:34:16.285012   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 17:34:16.308578   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 17:34:16.323178   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 17:34:16.329567   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
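
Each "-checkend 86400" invocation above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now: the command exits 0 if so and non-zero if it expires sooner, presumably so the start path can regenerate any control-plane certificate that is about to lapse. The same check stands alone as (a sketch):

    # exit status 0 = still valid 24h from now; non-zero = expires within 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "ok for at least 24h" || echo "expires within 24h"
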
	I1028 17:34:16.342448   37774 kubeadm.go:392] StartCluster: {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:34:16.342545   37774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:34:16.342582   37774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:34:16.404708   37774 cri.go:89] found id: "8a2fd7aacabd8e90ca1bfaa4e7c8134927378a79300e93e8500b6bc1fc630e39"
	I1028 17:34:16.404733   37774 cri.go:89] found id: "c3354a04d7be15ef43b17c35238a8192be419ccef672bd66b48f416ea9fcf3b7"
	I1028 17:34:16.404736   37774 cri.go:89] found id: "0bf77b5a62be4994778774cd52e855f345b092d89ef59779f104d94cbbb1db90"
	I1028 17:34:16.404739   37774 cri.go:89] found id: "ca42fffe1586b554a0db318b47779fafce0e50167256f25a9fc7f4b48bfc059a"
	I1028 17:34:16.404742   37774 cri.go:89] found id: "725ced7876ed08889d3f74fa5c4c8a33ecd26da44bbc1c0d7ff6b21dc527f663"
	I1028 17:34:16.404751   37774 cri.go:89] found id: "1060913f6886b0b7021792342930ce4fbeb774054258798ad5176a69344123ee"
	I1028 17:34:16.404754   37774 cri.go:89] found id: "4fa4ef36f67f276908cd6a9ae9defd7fe8b1ba8d88506d3320a2613a448a2284"
	I1028 17:34:16.404757   37774 cri.go:89] found id: "da12f85c717594334ae8f6486a0297a2c55f58c4a1f00fde1b6833547e695980"
	I1028 17:34:16.404760   37774 cri.go:89] found id: "3179f8b1830b7354e99efad950aedae97caf639e12dcb0124c5ddc9795338d37"
	I1028 17:34:16.404767   37774 cri.go:89] found id: "d36a9d087e6521b56264baef50be0d64c0e582e8f59495ebbb576fd5c145290b"
	I1028 17:34:16.404771   37774 cri.go:89] found id: "fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30"
	I1028 17:34:16.404776   37774 cri.go:89] found id: "439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f"
	I1028 17:34:16.404780   37774 cri.go:89] found id: "02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3"
	I1028 17:34:16.404784   37774 cri.go:89] found id: "4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa"
	I1028 17:34:16.404791   37774 cri.go:89] found id: "c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8"
	I1028 17:34:16.404795   37774 cri.go:89] found id: "5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9"
	I1028 17:34:16.404800   37774 cri.go:89] found id: "8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b"
	I1028 17:34:16.404805   37774 cri.go:89] found id: ""
	I1028 17:34:16.404853   37774 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-381619 -n ha-381619
helpers_test.go:261: (dbg) Run:  kubectl --context ha-381619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (268.96s)
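Note: the post-mortem above enumerates kube-system containers by shelling out to crictl with a namespace label filter (the cri.go:54 / cri.go:89 lines). For readers reproducing that step by hand, the sketch below shows the same enumeration. It is illustrative only and is not part of the test harness; it assumes crictl is installed on the minikube node and that the program is run there (for example via minikube ssh).

	// listkube.go: minimal sketch mirroring the cri.go step above.
	// Assumption: runs on the node itself, where crictl is installed and sudo works.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the harness logs in ssh_runner.go:195 above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// Each whitespace-separated token is a container ID, matching the
		// "cri.go:89] found id: ..." lines in the log.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}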

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 stop -v=7 --alsologtostderr
E1028 17:36:56.504570   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:38:38.394857   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-381619 stop -v=7 --alsologtostderr: exit status 82 (2m0.450138843s)

                                                
                                                
-- stdout --
	* Stopping node "ha-381619-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:36:50.383564   39156 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:36:50.383657   39156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:36:50.383665   39156 out.go:358] Setting ErrFile to fd 2...
	I1028 17:36:50.383669   39156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:36:50.383874   39156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:36:50.384113   39156 out.go:352] Setting JSON to false
	I1028 17:36:50.384180   39156 mustload.go:65] Loading cluster: ha-381619
	I1028 17:36:50.384679   39156 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:36:50.384813   39156 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:36:50.385051   39156 mustload.go:65] Loading cluster: ha-381619
	I1028 17:36:50.385240   39156 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:36:50.385269   39156 stop.go:39] StopHost: ha-381619-m04
	I1028 17:36:50.385640   39156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:36:50.385680   39156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:36:50.399887   39156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42445
	I1028 17:36:50.400401   39156 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:36:50.401066   39156 main.go:141] libmachine: Using API Version  1
	I1028 17:36:50.401095   39156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:36:50.401435   39156 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:36:50.403618   39156 out.go:177] * Stopping node "ha-381619-m04"  ...
	I1028 17:36:50.405223   39156 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 17:36:50.405254   39156 main.go:141] libmachine: (ha-381619-m04) Calling .DriverName
	I1028 17:36:50.405457   39156 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 17:36:50.405482   39156 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHHostname
	I1028 17:36:50.407968   39156 main.go:141] libmachine: (ha-381619-m04) DBG | domain ha-381619-m04 has defined MAC address 52:54:00:6b:0d:06 in network mk-ha-381619
	I1028 17:36:50.408387   39156 main.go:141] libmachine: (ha-381619-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0d:06", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:36:17 +0000 UTC Type:0 Mac:52:54:00:6b:0d:06 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-381619-m04 Clientid:01:52:54:00:6b:0d:06}
	I1028 17:36:50.408413   39156 main.go:141] libmachine: (ha-381619-m04) DBG | domain ha-381619-m04 has defined IP address 192.168.39.224 and MAC address 52:54:00:6b:0d:06 in network mk-ha-381619
	I1028 17:36:50.408522   39156 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHPort
	I1028 17:36:50.408656   39156 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHKeyPath
	I1028 17:36:50.408765   39156 main.go:141] libmachine: (ha-381619-m04) Calling .GetSSHUsername
	I1028 17:36:50.408871   39156 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619-m04/id_rsa Username:docker}
	I1028 17:36:50.495153   39156 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 17:36:50.547590   39156 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 17:36:50.600329   39156 main.go:141] libmachine: Stopping "ha-381619-m04"...
	I1028 17:36:50.600368   39156 main.go:141] libmachine: (ha-381619-m04) Calling .GetState
	I1028 17:36:50.601861   39156 main.go:141] libmachine: (ha-381619-m04) Calling .Stop
	I1028 17:36:50.605195   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 0/120
	I1028 17:36:51.606479   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 1/120
	I1028 17:36:52.607691   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 2/120
	I1028 17:36:53.609119   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 3/120
	I1028 17:36:54.610397   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 4/120
	I1028 17:36:55.612421   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 5/120
	I1028 17:36:56.613625   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 6/120
	I1028 17:36:57.614835   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 7/120
	I1028 17:36:58.616028   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 8/120
	I1028 17:36:59.617204   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 9/120
	I1028 17:37:00.619476   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 10/120
	I1028 17:37:01.620754   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 11/120
	I1028 17:37:02.622024   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 12/120
	I1028 17:37:03.623379   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 13/120
	I1028 17:37:04.624674   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 14/120
	I1028 17:37:05.626384   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 15/120
	I1028 17:37:06.627611   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 16/120
	I1028 17:37:07.628874   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 17/120
	I1028 17:37:08.631199   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 18/120
	I1028 17:37:09.632358   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 19/120
	I1028 17:37:10.634347   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 20/120
	I1028 17:37:11.635483   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 21/120
	I1028 17:37:12.637671   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 22/120
	I1028 17:37:13.638769   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 23/120
	I1028 17:37:14.639944   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 24/120
	I1028 17:37:15.641808   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 25/120
	I1028 17:37:16.643674   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 26/120
	I1028 17:37:17.645873   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 27/120
	I1028 17:37:18.646955   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 28/120
	I1028 17:37:19.648715   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 29/120
	I1028 17:37:20.650488   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 30/120
	I1028 17:37:21.651728   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 31/120
	I1028 17:37:22.652844   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 32/120
	I1028 17:37:23.654003   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 33/120
	I1028 17:37:24.655314   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 34/120
	I1028 17:37:25.657119   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 35/120
	I1028 17:37:26.658929   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 36/120
	I1028 17:37:27.660283   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 37/120
	I1028 17:37:28.661769   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 38/120
	I1028 17:37:29.662963   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 39/120
	I1028 17:37:30.665076   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 40/120
	I1028 17:37:31.667240   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 41/120
	I1028 17:37:32.669194   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 42/120
	I1028 17:37:33.670600   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 43/120
	I1028 17:37:34.671708   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 44/120
	I1028 17:37:35.673525   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 45/120
	I1028 17:37:36.674848   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 46/120
	I1028 17:37:37.675963   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 47/120
	I1028 17:37:38.677570   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 48/120
	I1028 17:37:39.678661   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 49/120
	I1028 17:37:40.680639   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 50/120
	I1028 17:37:41.681782   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 51/120
	I1028 17:37:42.682898   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 52/120
	I1028 17:37:43.684054   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 53/120
	I1028 17:37:44.685288   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 54/120
	I1028 17:37:45.686751   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 55/120
	I1028 17:37:46.688770   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 56/120
	I1028 17:37:47.689868   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 57/120
	I1028 17:37:48.691143   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 58/120
	I1028 17:37:49.692324   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 59/120
	I1028 17:37:50.693926   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 60/120
	I1028 17:37:51.695095   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 61/120
	I1028 17:37:52.696228   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 62/120
	I1028 17:37:53.697734   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 63/120
	I1028 17:37:54.699170   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 64/120
	I1028 17:37:55.700975   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 65/120
	I1028 17:37:56.702664   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 66/120
	I1028 17:37:57.704345   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 67/120
	I1028 17:37:58.705553   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 68/120
	I1028 17:37:59.707364   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 69/120
	I1028 17:38:00.708672   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 70/120
	I1028 17:38:01.709883   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 71/120
	I1028 17:38:02.710941   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 72/120
	I1028 17:38:03.712182   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 73/120
	I1028 17:38:04.713225   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 74/120
	I1028 17:38:05.714997   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 75/120
	I1028 17:38:06.716066   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 76/120
	I1028 17:38:07.717545   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 77/120
	I1028 17:38:08.718804   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 78/120
	I1028 17:38:09.720245   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 79/120
	I1028 17:38:10.722073   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 80/120
	I1028 17:38:11.723266   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 81/120
	I1028 17:38:12.724614   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 82/120
	I1028 17:38:13.726915   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 83/120
	I1028 17:38:14.728261   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 84/120
	I1028 17:38:15.730178   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 85/120
	I1028 17:38:16.731493   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 86/120
	I1028 17:38:17.732845   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 87/120
	I1028 17:38:18.734938   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 88/120
	I1028 17:38:19.736188   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 89/120
	I1028 17:38:20.737930   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 90/120
	I1028 17:38:21.739170   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 91/120
	I1028 17:38:22.740838   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 92/120
	I1028 17:38:23.742756   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 93/120
	I1028 17:38:24.744175   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 94/120
	I1028 17:38:25.746189   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 95/120
	I1028 17:38:26.747405   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 96/120
	I1028 17:38:27.749345   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 97/120
	I1028 17:38:28.750671   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 98/120
	I1028 17:38:29.751788   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 99/120
	I1028 17:38:30.753750   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 100/120
	I1028 17:38:31.755011   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 101/120
	I1028 17:38:32.756804   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 102/120
	I1028 17:38:33.758074   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 103/120
	I1028 17:38:34.759603   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 104/120
	I1028 17:38:35.761623   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 105/120
	I1028 17:38:36.762812   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 106/120
	I1028 17:38:37.763931   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 107/120
	I1028 17:38:38.765126   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 108/120
	I1028 17:38:39.767115   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 109/120
	I1028 17:38:40.768889   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 110/120
	I1028 17:38:41.770063   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 111/120
	I1028 17:38:42.771463   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 112/120
	I1028 17:38:43.772766   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 113/120
	I1028 17:38:44.773915   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 114/120
	I1028 17:38:45.775417   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 115/120
	I1028 17:38:46.776676   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 116/120
	I1028 17:38:47.778000   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 117/120
	I1028 17:38:48.779093   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 118/120
	I1028 17:38:49.780439   39156 main.go:141] libmachine: (ha-381619-m04) Waiting for machine to stop 119/120
	I1028 17:38:50.781104   39156 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 17:38:50.781197   39156 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 17:38:50.783232   39156 out.go:201] 
	W1028 17:38:50.784556   39156 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 17:38:50.784572   39156 out.go:270] * 
	* 
	W1028 17:38:50.786946   39156 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 17:38:50.788140   39156 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-381619 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr: (18.939407346s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr": 
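Note: the stderr above shows the stop path backing up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, then polling the VM state once per second ("Waiting for machine to stop N/120"). Because the domain never left "Running", the 120th poll was reached and the command exited with status 82 (GUEST_STOP_TIMEOUT), which is what the ha_test.go:535/545/551/554 assertions then report. The sketch below is a minimal illustration of that polling pattern, not minikube's actual code; getState is a hypothetical stand-in for the driver's libvirt state query.

	// stopwait.go: illustrative sketch of the stop-and-wait pattern seen above.
	// Assumption: getState is a placeholder; a real driver would ask libvirt
	// for the domain state instead of returning a constant.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func getState() string { return "Running" } // placeholder for the libvirt query

	func waitForStop(retries int) error {
		for i := 0; i < retries; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, retries)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// 120 one-second polls ≈ the two minutes after which the test run
		// above gave up and exited with status 82.
		if err := waitForStop(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}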
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-381619 -n ha-381619
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 logs -n 25: (1.919661824s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m04 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp testdata/cp-test.txt                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619:/home/docker/cp-test_ha-381619-m04_ha-381619.txt                      |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619 sudo cat                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619.txt                                |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m02:/home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m02 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m03:/home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n                                                                | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | ha-381619-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-381619 ssh -n ha-381619-m03 sudo cat                                         | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC | 28 Oct 24 17:29 UTC |
	|         | /home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-381619 node stop m02 -v=7                                                    | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:29 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-381619 node start m02 -v=7                                                   | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:31 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-381619 -v=7                                                          | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:32 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-381619 -v=7                                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:32 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-381619 --wait=true -v=7                                                   | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:34 UTC | 28 Oct 24 17:36 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-381619                                                               | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:36 UTC |                     |
	| node    | ha-381619 node delete m03 -v=7                                                  | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:36 UTC | 28 Oct 24 17:36 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-381619 stop -v=7                                                             | ha-381619 | jenkins | v1.34.0 | 28 Oct 24 17:36 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:34:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:34:06.081446   37774 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:34:06.081794   37774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:34:06.081849   37774 out.go:358] Setting ErrFile to fd 2...
	I1028 17:34:06.081867   37774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:34:06.082313   37774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:34:06.083179   37774 out.go:352] Setting JSON to false
	I1028 17:34:06.084069   37774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4589,"bootTime":1730132257,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:34:06.084170   37774 start.go:139] virtualization: kvm guest
	I1028 17:34:06.086109   37774 out.go:177] * [ha-381619] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:34:06.087679   37774 notify.go:220] Checking for updates...
	I1028 17:34:06.087695   37774 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:34:06.088908   37774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:34:06.090095   37774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:34:06.091357   37774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:34:06.092669   37774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:34:06.093728   37774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:34:06.095178   37774 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:34:06.095285   37774 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:34:06.095688   37774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:34:06.095725   37774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:34:06.111722   37774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I1028 17:34:06.112173   37774 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:34:06.112845   37774 main.go:141] libmachine: Using API Version  1
	I1028 17:34:06.112883   37774 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:34:06.113307   37774 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:34:06.113510   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:06.146951   37774 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 17:34:06.148167   37774 start.go:297] selected driver: kvm2
	I1028 17:34:06.148181   37774 start.go:901] validating driver "kvm2" against &{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:34:06.148298   37774 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:34:06.148628   37774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:34:06.148689   37774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:34:06.162653   37774 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:34:06.163427   37774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:34:06.163461   37774 cni.go:84] Creating CNI manager for ""
	I1028 17:34:06.163514   37774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 17:34:06.163570   37774 start.go:340] cluster config:
	{Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:34:06.163703   37774 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:34:06.165272   37774 out.go:177] * Starting "ha-381619" primary control-plane node in "ha-381619" cluster
	I1028 17:34:06.166512   37774 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:34:06.166559   37774 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:34:06.166571   37774 cache.go:56] Caching tarball of preloaded images
	I1028 17:34:06.166630   37774 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:34:06.166640   37774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:34:06.166744   37774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/config.json ...
	I1028 17:34:06.166912   37774 start.go:360] acquireMachinesLock for ha-381619: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:34:06.166949   37774 start.go:364] duration metric: took 19.2µs to acquireMachinesLock for "ha-381619"
	I1028 17:34:06.166962   37774 start.go:96] Skipping create...Using existing machine configuration
	I1028 17:34:06.166967   37774 fix.go:54] fixHost starting: 
	I1028 17:34:06.167202   37774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:34:06.167228   37774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:34:06.180368   37774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I1028 17:34:06.180762   37774 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:34:06.181229   37774 main.go:141] libmachine: Using API Version  1
	I1028 17:34:06.181243   37774 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:34:06.181551   37774 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:34:06.181734   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:06.181841   37774 main.go:141] libmachine: (ha-381619) Calling .GetState
	I1028 17:34:06.183196   37774 fix.go:112] recreateIfNeeded on ha-381619: state=Running err=<nil>
	W1028 17:34:06.183215   37774 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 17:34:06.184878   37774 out.go:177] * Updating the running kvm2 "ha-381619" VM ...
	I1028 17:34:06.186039   37774 machine.go:93] provisionDockerMachine start ...
	I1028 17:34:06.186065   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:06.186218   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.188658   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.189099   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.189124   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.189242   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.189412   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.189547   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.189644   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.189754   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.189915   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.189923   37774 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 17:34:06.297544   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:34:06.297566   37774 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:34:06.297808   37774 buildroot.go:166] provisioning hostname "ha-381619"
	I1028 17:34:06.297833   37774 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:34:06.298017   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.300611   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.300973   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.301008   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.301187   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.301365   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.301534   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.301714   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.301875   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.302081   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.302098   37774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-381619 && echo "ha-381619" | sudo tee /etc/hostname
	I1028 17:34:06.417209   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-381619
	
	I1028 17:34:06.417247   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.420052   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.420426   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.420449   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.420610   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.420765   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.420955   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.421089   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.421234   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.421440   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.421459   37774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-381619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-381619/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-381619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:34:06.526705   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:34:06.526730   37774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:34:06.526769   37774 buildroot.go:174] setting up certificates
	I1028 17:34:06.526785   37774 provision.go:84] configureAuth start
	I1028 17:34:06.526800   37774 main.go:141] libmachine: (ha-381619) Calling .GetMachineName
	I1028 17:34:06.527017   37774 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:34:06.529755   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.530100   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.530129   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.530296   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.532429   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.532793   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.532815   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.532943   37774 provision.go:143] copyHostCerts
	I1028 17:34:06.532975   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:34:06.533031   37774 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:34:06.533071   37774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:34:06.533159   37774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:34:06.533245   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:34:06.533273   37774 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:34:06.533281   37774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:34:06.533318   37774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:34:06.533371   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:34:06.533395   37774 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:34:06.533404   37774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:34:06.533435   37774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:34:06.533508   37774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.ha-381619 san=[127.0.0.1 192.168.39.230 ha-381619 localhost minikube]
	I1028 17:34:06.790443   37774 provision.go:177] copyRemoteCerts
	I1028 17:34:06.790492   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:34:06.790513   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.792989   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.793340   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.793371   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.793555   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.793743   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.793897   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.794037   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:06.874991   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:34:06.875068   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:34:06.899939   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:34:06.900007   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 17:34:06.925082   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:34:06.925134   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 17:34:06.951736   37774 provision.go:87] duration metric: took 424.938946ms to configureAuth
	I1028 17:34:06.951776   37774 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:34:06.952002   37774 config.go:182] Loaded profile config "ha-381619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:34:06.952084   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:06.954553   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.954864   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:06.954892   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:06.955053   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:06.955252   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.955412   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:06.955514   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:06.955615   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:06.955811   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:06.955838   37774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:34:12.541953   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:34:12.541981   37774 machine.go:96] duration metric: took 6.355927371s to provisionDockerMachine
	I1028 17:34:12.541994   37774 start.go:293] postStartSetup for "ha-381619" (driver="kvm2")
	I1028 17:34:12.542007   37774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:34:12.542044   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.542484   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:34:12.542515   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.545152   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.545571   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.545599   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.545779   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.545952   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.546086   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.546225   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:12.627004   37774 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:34:12.631066   37774 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:34:12.631092   37774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:34:12.631161   37774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:34:12.631266   37774 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:34:12.631280   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:34:12.631403   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:34:12.640805   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:34:12.663783   37774 start.go:296] duration metric: took 121.775784ms for postStartSetup
	I1028 17:34:12.663819   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.664061   37774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1028 17:34:12.664083   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.666556   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.666886   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.666913   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.667030   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.667192   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.667343   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.667456   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	W1028 17:34:12.746859   37774 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1028 17:34:12.746885   37774 fix.go:56] duration metric: took 6.579917404s for fixHost
	I1028 17:34:12.746907   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.749219   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.749530   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.749554   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.749701   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.749871   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.749991   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.750131   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.750255   37774 main.go:141] libmachine: Using SSH client type: native
	I1028 17:34:12.750435   37774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 17:34:12.750445   37774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:34:12.853006   37774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730136852.809902149
	
	I1028 17:34:12.853046   37774 fix.go:216] guest clock: 1730136852.809902149
	I1028 17:34:12.853057   37774 fix.go:229] Guest: 2024-10-28 17:34:12.809902149 +0000 UTC Remote: 2024-10-28 17:34:12.746893174 +0000 UTC m=+6.700949872 (delta=63.008975ms)
	I1028 17:34:12.853087   37774 fix.go:200] guest clock delta is within tolerance: 63.008975ms
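
The guest-clock step above runs date +%s.%N on the VM and compares it to the host clock, resyncing only when the delta is too large. A minimal sketch of that comparison, assuming a 2-second tolerance purely for illustration (the real threshold is not shown in this log), and not minikube's actual fix.go code:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output such as
// "1730136852.809902149" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so "8099" means 809,900,000 ns.
		frac := parts[1]
		if len(frac) < 9 {
			frac += strings.Repeat("0", 9-len(frac))
		}
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730136852.809902149")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // assumed value, for illustration only
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
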
	I1028 17:34:12.853095   37774 start.go:83] releasing machines lock for "ha-381619", held for 6.686136886s
	I1028 17:34:12.853120   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.853347   37774 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:34:12.855659   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.856087   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.856116   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.856250   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.856791   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.856972   37774 main.go:141] libmachine: (ha-381619) Calling .DriverName
	I1028 17:34:12.857056   37774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:34:12.857103   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.857130   37774 ssh_runner.go:195] Run: cat /version.json
	I1028 17:34:12.857148   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHHostname
	I1028 17:34:12.859421   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.859665   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.859816   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.859842   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.859974   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.860116   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.860122   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:12.860155   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:12.860260   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.860264   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHPort
	I1028 17:34:12.860386   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:12.860441   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHKeyPath
	I1028 17:34:12.860547   37774 main.go:141] libmachine: (ha-381619) Calling .GetSSHUsername
	I1028 17:34:12.860666   37774 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/ha-381619/id_rsa Username:docker}
	I1028 17:34:12.961793   37774 ssh_runner.go:195] Run: systemctl --version
	I1028 17:34:12.967487   37774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:34:13.121093   37774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 17:34:13.126876   37774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:34:13.126937   37774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:34:13.135833   37774 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 17:34:13.135852   37774 start.go:495] detecting cgroup driver to use...
	I1028 17:34:13.135910   37774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:34:13.151508   37774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:34:13.164573   37774 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:34:13.164612   37774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:34:13.177244   37774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:34:13.189820   37774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:34:13.325059   37774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:34:13.461542   37774 docker.go:233] disabling docker service ...
	I1028 17:34:13.461612   37774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:34:13.476956   37774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:34:13.489988   37774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:34:13.622617   37774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:34:13.757488   37774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:34:13.771459   37774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:34:13.791274   37774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:34:13.791344   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.801454   37774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:34:13.801514   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.811397   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.821235   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.831380   37774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:34:13.841461   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.851568   37774 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.861716   37774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:34:13.872128   37774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:34:13.881325   37774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:34:13.890341   37774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:34:14.036863   37774 ssh_runner.go:195] Run: sudo systemctl restart crio
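
The sed commands above force pause_image and cgroup_manager in CRI-O's drop-in config before the service is restarted. A rough Go analogue of those two replacements, operating on an assumed sample config rather than the VM's real 02-crio.conf:

package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf replaces any existing pause_image and cgroup_manager
// lines, mirroring the sed invocations shown in the log above.
func rewriteCrioConf(conf []byte) []byte {
	conf = pauseRe.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	conf = cgroupRe.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	return conf
}

func main() {
	// Hypothetical input config for demonstration.
	in := []byte("[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\n")
	fmt.Printf("%s", rewriteCrioConf(in))
}
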
	I1028 17:34:14.231725   37774 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:34:14.231779   37774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:34:14.237005   37774 start.go:563] Will wait 60s for crictl version
	I1028 17:34:14.237038   37774 ssh_runner.go:195] Run: which crictl
	I1028 17:34:14.240982   37774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:34:14.279184   37774 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 17:34:14.279242   37774 ssh_runner.go:195] Run: crio --version
	I1028 17:34:14.309098   37774 ssh_runner.go:195] Run: crio --version
	I1028 17:34:14.348740   37774 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:34:14.350029   37774 main.go:141] libmachine: (ha-381619) Calling .GetIP
	I1028 17:34:14.352430   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:14.352800   37774 main.go:141] libmachine: (ha-381619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:e3:f2", ip: ""} in network mk-ha-381619: {Iface:virbr1 ExpiryTime:2024-10-28 18:24:47 +0000 UTC Type:0 Mac:52:54:00:bf:e3:f2 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-381619 Clientid:01:52:54:00:bf:e3:f2}
	I1028 17:34:14.352819   37774 main.go:141] libmachine: (ha-381619) DBG | domain ha-381619 has defined IP address 192.168.39.230 and MAC address 52:54:00:bf:e3:f2 in network mk-ha-381619
	I1028 17:34:14.353007   37774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:34:14.357785   37774 kubeadm.go:883] updating cluster {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:34:14.357929   37774 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:34:14.357967   37774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:34:14.399010   37774 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:34:14.399028   37774 crio.go:433] Images already preloaded, skipping extraction
	I1028 17:34:14.399081   37774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:34:14.431546   37774 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:34:14.431562   37774 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:34:14.431571   37774 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.2 crio true true} ...
	I1028 17:34:14.431664   37774 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-381619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
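
The kubelet drop-in above is generated per node, with the profile's Kubernetes version, hostname and node IP substituted in before it is copied to the guest. A minimal text/template sketch of that substitution; the template text and type names here are illustrative, not minikube's actual kubeadm template:

package main

import (
	"os"
	"text/template"
)

type nodeConfig struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	cfg := nodeConfig{
		KubernetesVersion: "v1.31.2",
		Hostname:          "ha-381619",
		NodeIP:            "192.168.39.230",
	}
	// Print the rendered unit; the log above instead scp's the result to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
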
	I1028 17:34:14.431720   37774 ssh_runner.go:195] Run: crio config
	I1028 17:34:14.481452   37774 cni.go:84] Creating CNI manager for ""
	I1028 17:34:14.481472   37774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 17:34:14.481483   37774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:34:14.481517   37774 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-381619 NodeName:ha-381619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:34:14.481659   37774 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-381619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:34:14.481681   37774 kube-vip.go:115] generating kube-vip config ...
	I1028 17:34:14.481734   37774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 17:34:14.493141   37774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 17:34:14.493243   37774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
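
The generated kube-vip manifest above is later copied to /etc/kubernetes/manifests (the kubelet's staticPodPath from the generated config), so the kubelet runs it as a static pod even before the API server is reachable. A small sketch of writing such a manifest into the static-pod directory; writeStaticPod is a hypothetical helper, not a minikube function:

package main

import (
	"os"
	"path/filepath"
)

// writeStaticPod writes the manifest to a dot-prefixed temp file first and
// then renames it, so the kubelet never picks up a half-written file.
func writeStaticPod(manifestDir, name string, manifest []byte) error {
	tmp := filepath.Join(manifestDir, "."+name+".tmp")
	if err := os.WriteFile(tmp, manifest, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(manifestDir, name))
}

func main() {
	// Placeholder content standing in for the kube-vip spec shown above.
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... generated kube-vip spec ...\n")
	if err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
		panic(err)
	}
}
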
	I1028 17:34:14.493288   37774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:34:14.503177   37774 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:34:14.503265   37774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 17:34:14.512392   37774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 17:34:14.528555   37774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:34:14.544374   37774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 17:34:14.560264   37774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 17:34:14.577007   37774 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 17:34:14.581972   37774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:34:14.714503   37774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:34:14.729411   37774 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619 for IP: 192.168.39.230
	I1028 17:34:14.729430   37774 certs.go:194] generating shared ca certs ...
	I1028 17:34:14.729444   37774 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:34:14.729603   37774 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:34:14.729659   37774 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:34:14.729672   37774 certs.go:256] generating profile certs ...
	I1028 17:34:14.729783   37774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/client.key
	I1028 17:34:14.729815   37774 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd
	I1028 17:34:14.729835   37774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230 192.168.39.171 192.168.39.17 192.168.39.254]
	I1028 17:34:14.782067   37774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd ...
	I1028 17:34:14.782093   37774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd: {Name:mkb247ab2c4d11778d7be3979ba86e665737952f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:34:14.782267   37774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd ...
	I1028 17:34:14.782286   37774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd: {Name:mkdd41c5146a9e432f4d3ba9dadb2655d7828245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:34:14.782378   37774 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt.524980cd -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt
	I1028 17:34:14.782583   37774 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key.524980cd -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key
	I1028 17:34:14.782735   37774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key
	I1028 17:34:14.782752   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:34:14.782769   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:34:14.782788   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:34:14.782805   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:34:14.782821   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:34:14.782837   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:34:14.782851   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:34:14.782871   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:34:14.782934   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:34:14.782965   37774 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:34:14.782978   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:34:14.783015   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:34:14.783042   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:34:14.783073   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:34:14.783126   37774 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:34:14.783159   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:14.783177   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:34:14.783198   37774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:34:14.783767   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:34:14.810324   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:34:14.834198   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:34:14.858547   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:34:14.881927   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 17:34:14.904418   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:34:14.927501   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:34:14.950808   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/ha-381619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:34:15.045068   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:34:15.143693   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:34:15.280696   37774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:34:15.486969   37774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:34:15.649604   37774 ssh_runner.go:195] Run: openssl version
	I1028 17:34:15.701599   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:34:15.772288   37774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:15.788243   37774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:15.788303   37774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:34:15.805157   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:34:15.824755   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:34:15.849874   37774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:34:15.855169   37774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:34:15.855207   37774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:34:15.879573   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:34:15.912134   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:34:15.949460   37774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:34:16.001913   37774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:34:16.001975   37774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:34:16.050584   37774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:34:16.104909   37774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:34:16.139249   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 17:34:16.174256   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 17:34:16.285012   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 17:34:16.308578   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 17:34:16.323178   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 17:34:16.329567   37774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
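
Each of the openssl x509 -noout -checkend 86400 runs above asks whether a cluster certificate expires within the next 24 hours (exit status decides whether it gets regenerated). A hedged Go equivalent using crypto/x509; the certificate path in main is illustrative, matching one of the files checked over SSH above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// duration d (or has already expired), mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; would regenerate")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
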
	I1028 17:34:16.342448   37774 kubeadm.go:392] StartCluster: {Name:ha-381619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-381619 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.224 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:34:16.342545   37774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:34:16.342582   37774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:34:16.404708   37774 cri.go:89] found id: "8a2fd7aacabd8e90ca1bfaa4e7c8134927378a79300e93e8500b6bc1fc630e39"
	I1028 17:34:16.404733   37774 cri.go:89] found id: "c3354a04d7be15ef43b17c35238a8192be419ccef672bd66b48f416ea9fcf3b7"
	I1028 17:34:16.404736   37774 cri.go:89] found id: "0bf77b5a62be4994778774cd52e855f345b092d89ef59779f104d94cbbb1db90"
	I1028 17:34:16.404739   37774 cri.go:89] found id: "ca42fffe1586b554a0db318b47779fafce0e50167256f25a9fc7f4b48bfc059a"
	I1028 17:34:16.404742   37774 cri.go:89] found id: "725ced7876ed08889d3f74fa5c4c8a33ecd26da44bbc1c0d7ff6b21dc527f663"
	I1028 17:34:16.404751   37774 cri.go:89] found id: "1060913f6886b0b7021792342930ce4fbeb774054258798ad5176a69344123ee"
	I1028 17:34:16.404754   37774 cri.go:89] found id: "4fa4ef36f67f276908cd6a9ae9defd7fe8b1ba8d88506d3320a2613a448a2284"
	I1028 17:34:16.404757   37774 cri.go:89] found id: "da12f85c717594334ae8f6486a0297a2c55f58c4a1f00fde1b6833547e695980"
	I1028 17:34:16.404760   37774 cri.go:89] found id: "3179f8b1830b7354e99efad950aedae97caf639e12dcb0124c5ddc9795338d37"
	I1028 17:34:16.404767   37774 cri.go:89] found id: "d36a9d087e6521b56264baef50be0d64c0e582e8f59495ebbb576fd5c145290b"
	I1028 17:34:16.404771   37774 cri.go:89] found id: "fb3c00b93a7e67ddafbde15e7b13e0e96a31fe47e72b4ad18a1c68d98f64ed30"
	I1028 17:34:16.404776   37774 cri.go:89] found id: "439a12fd4f2e97450fa8c7b6befe3861f9d843fa2e3a213ada2a37911994863f"
	I1028 17:34:16.404780   37774 cri.go:89] found id: "02eaa5b848022deea6049f6a6b1b92a9c0ee9145a1cc54164436a3bc3c70efc3"
	I1028 17:34:16.404784   37774 cri.go:89] found id: "4c2af4b0e8f709531bd7b2e58eccf49c4a7806c51a0d4876374ff2ee3254dafa"
	I1028 17:34:16.404791   37774 cri.go:89] found id: "c4311ab52a43818424abc397049043cb2ee45579e27707de01bd8c82ac34c2b8"
	I1028 17:34:16.404795   37774 cri.go:89] found id: "5d299a6ffacac1658ca993595a1514ca77ba1fc145b2c1f4c520e4cd51effcb9"
	I1028 17:34:16.404800   37774 cri.go:89] found id: "8f6c077dbde89a9527c07dc805e7d825fb73e839eb47fc856d4fbd16911bae9b"
	I1028 17:34:16.404805   37774 cri.go:89] found id: ""
	I1028 17:34:16.404853   37774 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-381619 -n ha-381619
helpers_test.go:261: (dbg) Run:  kubectl --context ha-381619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.87s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (329.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-949956
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-949956
E1028 17:55:33.437963   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-949956: exit status 82 (2m1.819096384s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-949956-m03"  ...
	* Stopping node "multinode-949956-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-949956" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=8 --alsologtostderr
E1028 17:58:38.394928   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=8 --alsologtostderr: (3m25.363712304s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-949956
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-949956 -n multinode-949956
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 logs -n 25: (1.969743151s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile997746669/001/cp-test_multinode-949956-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956:/home/docker/cp-test_multinode-949956-m02_multinode-949956.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956 sudo cat                                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m02_multinode-949956.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03:/home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956-m03 sudo cat                                   | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp testdata/cp-test.txt                                                | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile997746669/001/cp-test_multinode-949956-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956:/home/docker/cp-test_multinode-949956-m03_multinode-949956.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956 sudo cat                                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m03_multinode-949956.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02:/home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956-m02 sudo cat                                   | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-949956 node stop m03                                                          | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	| node    | multinode-949956 node start                                                             | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-949956                                                                | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:54 UTC |                     |
	| stop    | -p multinode-949956                                                                     | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:54 UTC |                     |
	| start   | -p multinode-949956                                                                     | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:56 UTC | 28 Oct 24 18:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-949956                                                                | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 18:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:56:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:56:42.795491   49274 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:56:42.795736   49274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:56:42.795745   49274 out.go:358] Setting ErrFile to fd 2...
	I1028 17:56:42.795749   49274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:56:42.795900   49274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:56:42.796406   49274 out.go:352] Setting JSON to false
	I1028 17:56:42.797302   49274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5946,"bootTime":1730132257,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:56:42.797394   49274 start.go:139] virtualization: kvm guest
	I1028 17:56:42.799561   49274 out.go:177] * [multinode-949956] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:56:42.800974   49274 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:56:42.800978   49274 notify.go:220] Checking for updates...
	I1028 17:56:42.803308   49274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:56:42.804570   49274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:56:42.805726   49274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:56:42.806768   49274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:56:42.807913   49274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:56:42.809685   49274 config.go:182] Loaded profile config "multinode-949956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:56:42.809798   49274 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:56:42.810447   49274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:56:42.810505   49274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:56:42.825900   49274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I1028 17:56:42.826355   49274 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:56:42.826938   49274 main.go:141] libmachine: Using API Version  1
	I1028 17:56:42.826957   49274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:56:42.827302   49274 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:56:42.827507   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:56:42.861296   49274 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 17:56:42.862509   49274 start.go:297] selected driver: kvm2
	I1028 17:56:42.862523   49274 start.go:901] validating driver "kvm2" against &{Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:56:42.862694   49274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:56:42.863012   49274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:56:42.863081   49274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:56:42.876698   49274 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:56:42.877437   49274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:56:42.877465   49274 cni.go:84] Creating CNI manager for ""
	I1028 17:56:42.877525   49274 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 17:56:42.877579   49274 start.go:340] cluster config:
	{Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-949956 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:56:42.877715   49274 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:56:42.879466   49274 out.go:177] * Starting "multinode-949956" primary control-plane node in "multinode-949956" cluster
	I1028 17:56:42.880789   49274 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:56:42.880820   49274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:56:42.880830   49274 cache.go:56] Caching tarball of preloaded images
	I1028 17:56:42.880909   49274 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:56:42.880922   49274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:56:42.881029   49274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/config.json ...
	I1028 17:56:42.881214   49274 start.go:360] acquireMachinesLock for multinode-949956: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:56:42.881268   49274 start.go:364] duration metric: took 37.034µs to acquireMachinesLock for "multinode-949956"
	I1028 17:56:42.881281   49274 start.go:96] Skipping create...Using existing machine configuration
	I1028 17:56:42.881288   49274 fix.go:54] fixHost starting: 
	I1028 17:56:42.881572   49274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:56:42.881634   49274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:56:42.895108   49274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1028 17:56:42.895455   49274 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:56:42.895935   49274 main.go:141] libmachine: Using API Version  1
	I1028 17:56:42.895954   49274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:56:42.896270   49274 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:56:42.896422   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:56:42.896573   49274 main.go:141] libmachine: (multinode-949956) Calling .GetState
	I1028 17:56:42.898001   49274 fix.go:112] recreateIfNeeded on multinode-949956: state=Running err=<nil>
	W1028 17:56:42.898044   49274 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 17:56:42.899815   49274 out.go:177] * Updating the running kvm2 "multinode-949956" VM ...
	I1028 17:56:42.901034   49274 machine.go:93] provisionDockerMachine start ...
	I1028 17:56:42.901053   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:56:42.901258   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:42.903650   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:42.904098   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:42.904125   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:42.904201   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:42.904360   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:42.904508   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:42.904627   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:42.904761   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:42.904941   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:42.904952   49274 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 17:56:43.005321   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-949956
	
	I1028 17:56:43.005351   49274 main.go:141] libmachine: (multinode-949956) Calling .GetMachineName
	I1028 17:56:43.005570   49274 buildroot.go:166] provisioning hostname "multinode-949956"
	I1028 17:56:43.005592   49274 main.go:141] libmachine: (multinode-949956) Calling .GetMachineName
	I1028 17:56:43.005797   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.008187   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.008628   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.008653   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.008734   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.008885   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.009000   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.009104   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.009248   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:43.009443   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:43.009455   49274 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-949956 && echo "multinode-949956" | sudo tee /etc/hostname
	I1028 17:56:43.125205   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-949956
	
	I1028 17:56:43.125229   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.128048   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.128440   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.128502   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.128690   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.128872   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.128999   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.129143   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.129310   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:43.129470   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:43.129485   49274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-949956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-949956/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-949956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:56:43.225118   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:56:43.225148   49274 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:56:43.225165   49274 buildroot.go:174] setting up certificates
	I1028 17:56:43.225174   49274 provision.go:84] configureAuth start
	I1028 17:56:43.225182   49274 main.go:141] libmachine: (multinode-949956) Calling .GetMachineName
	I1028 17:56:43.225411   49274 main.go:141] libmachine: (multinode-949956) Calling .GetIP
	I1028 17:56:43.227730   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.228085   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.228114   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.228234   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.230320   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.230662   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.230692   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.230781   49274 provision.go:143] copyHostCerts
	I1028 17:56:43.230810   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:56:43.230860   49274 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:56:43.230876   49274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:56:43.230959   49274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:56:43.231042   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:56:43.231066   49274 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:56:43.231071   49274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:56:43.231104   49274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:56:43.231159   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:56:43.231185   49274 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:56:43.231194   49274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:56:43.231232   49274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:56:43.231305   49274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.multinode-949956 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-949956]
	I1028 17:56:43.588931   49274 provision.go:177] copyRemoteCerts
	I1028 17:56:43.589010   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:56:43.589037   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.591865   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.592239   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.592277   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.592511   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.592705   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.592848   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.592979   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:56:43.671075   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:56:43.671134   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:56:43.695989   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:56:43.696058   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 17:56:43.719491   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:56:43.719553   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:56:43.743666   49274 provision.go:87] duration metric: took 518.481902ms to configureAuth
	I1028 17:56:43.743691   49274 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:56:43.743886   49274 config.go:182] Loaded profile config "multinode-949956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:56:43.743954   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.746536   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.746843   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.746871   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.746995   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.747164   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.747321   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.747486   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.747665   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:43.747820   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:43.747833   49274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:58:14.326687   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:58:14.326749   49274 machine.go:96] duration metric: took 1m31.425701049s to provisionDockerMachine
	I1028 17:58:14.326772   49274 start.go:293] postStartSetup for "multinode-949956" (driver="kvm2")
	I1028 17:58:14.326795   49274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:58:14.326823   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.327191   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:58:14.327236   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.330177   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.330690   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.330714   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.330859   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.331027   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.331165   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.331310   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:58:14.411915   49274 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:58:14.416035   49274 command_runner.go:130] > NAME=Buildroot
	I1028 17:58:14.416056   49274 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 17:58:14.416063   49274 command_runner.go:130] > ID=buildroot
	I1028 17:58:14.416071   49274 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 17:58:14.416079   49274 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 17:58:14.416114   49274 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:58:14.416134   49274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:58:14.416221   49274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:58:14.416313   49274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:58:14.416334   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:58:14.416439   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:58:14.425832   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:58:14.448747   49274 start.go:296] duration metric: took 121.964321ms for postStartSetup
	I1028 17:58:14.448817   49274 fix.go:56] duration metric: took 1m31.567527114s for fixHost
	I1028 17:58:14.448850   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.451332   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.451767   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.451795   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.451941   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.452094   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.452238   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.452341   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.452501   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:58:14.452653   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:58:14.452663   49274 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:58:14.549104   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730138294.534079014
	
	I1028 17:58:14.549129   49274 fix.go:216] guest clock: 1730138294.534079014
	I1028 17:58:14.549138   49274 fix.go:229] Guest: 2024-10-28 17:58:14.534079014 +0000 UTC Remote: 2024-10-28 17:58:14.448828065 +0000 UTC m=+91.689815453 (delta=85.250949ms)
	I1028 17:58:14.549186   49274 fix.go:200] guest clock delta is within tolerance: 85.250949ms
	I1028 17:58:14.549196   49274 start.go:83] releasing machines lock for "multinode-949956", held for 1m31.667918735s
	I1028 17:58:14.549229   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.549482   49274 main.go:141] libmachine: (multinode-949956) Calling .GetIP
	I1028 17:58:14.551904   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.552196   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.552224   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.552371   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.552816   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.552977   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.553068   49274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:58:14.553111   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.553184   49274 ssh_runner.go:195] Run: cat /version.json
	I1028 17:58:14.553204   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.555495   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.555774   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.555802   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.555824   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.555983   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.556133   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.556220   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.556244   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.556278   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.556372   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.556428   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:58:14.556596   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.556711   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.556856   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:58:14.628748   49274 command_runner.go:130] > {"iso_version": "v1.34.0-1730109979-19872", "kicbase_version": "v0.0.45-1729876044-19868", "minikube_version": "v1.34.0", "commit": "3cd67be5b3d326faa45da4684b85954cdc5db093"}
	I1028 17:58:14.629039   49274 ssh_runner.go:195] Run: systemctl --version
	I1028 17:58:14.653472   49274 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1028 17:58:14.654096   49274 command_runner.go:130] > systemd 252 (252)
	I1028 17:58:14.654140   49274 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 17:58:14.654204   49274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:58:14.812857   49274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 17:58:14.819655   49274 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 17:58:14.819685   49274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:58:14.819728   49274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:58:14.828600   49274 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 17:58:14.828617   49274 start.go:495] detecting cgroup driver to use...
	I1028 17:58:14.828692   49274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:58:14.844051   49274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:58:14.858251   49274 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:58:14.858302   49274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:58:14.871660   49274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:58:14.884720   49274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:58:15.031965   49274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:58:15.188614   49274 docker.go:233] disabling docker service ...
	I1028 17:58:15.188690   49274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:58:15.206098   49274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:58:15.219926   49274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:58:15.373248   49274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:58:15.513548   49274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:58:15.526847   49274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:58:15.545832   49274 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1028 17:58:15.546310   49274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:58:15.546378   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.556616   49274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:58:15.556667   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.566432   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.576318   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.586186   49274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:58:15.596312   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.607150   49274 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.618446   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.628434   49274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:58:15.637274   49274 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 17:58:15.637316   49274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:58:15.646164   49274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:58:15.784292   49274 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:58:18.911346   49274 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.127019951s)
	I1028 17:58:18.911370   49274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:58:18.911408   49274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:58:18.916448   49274 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1028 17:58:18.916474   49274 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 17:58:18.916484   49274 command_runner.go:130] > Device: 0,22	Inode: 1259        Links: 1
	I1028 17:58:18.916494   49274 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 17:58:18.916502   49274 command_runner.go:130] > Access: 2024-10-28 17:58:18.795263900 +0000
	I1028 17:58:18.916514   49274 command_runner.go:130] > Modify: 2024-10-28 17:58:18.795263900 +0000
	I1028 17:58:18.916521   49274 command_runner.go:130] > Change: 2024-10-28 17:58:18.795263900 +0000
	I1028 17:58:18.916531   49274 command_runner.go:130] >  Birth: -
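The 60s socket wait above amounts to polling for the unix socket; a rough shell equivalent (timeout value as logged):

	timeout 60 sh -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 1; done'
	stat /var/run/crio/crio.sock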
	I1028 17:58:18.916668   49274 start.go:563] Will wait 60s for crictl version
	I1028 17:58:18.916709   49274 ssh_runner.go:195] Run: which crictl
	I1028 17:58:18.920316   49274 command_runner.go:130] > /usr/bin/crictl
	I1028 17:58:18.920515   49274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:58:18.957350   49274 command_runner.go:130] > Version:  0.1.0
	I1028 17:58:18.957372   49274 command_runner.go:130] > RuntimeName:  cri-o
	I1028 17:58:18.957377   49274 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1028 17:58:18.957382   49274 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 17:58:18.958238   49274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
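With /etc/crictl.yaml pointing at the cri-o socket (written earlier), the runtime can be queried directly; a minimal sketch matching the version output above:

	sudo crictl version            # RuntimeName: cri-o, RuntimeVersion: 1.29.1, RuntimeApiVersion: v1
	sudo crictl info | head -n 20  # runtime readiness and CNI status summary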
	I1028 17:58:18.958320   49274 ssh_runner.go:195] Run: crio --version
	I1028 17:58:18.984782   49274 command_runner.go:130] > crio version 1.29.1
	I1028 17:58:18.984799   49274 command_runner.go:130] > Version:        1.29.1
	I1028 17:58:18.984818   49274 command_runner.go:130] > GitCommit:      unknown
	I1028 17:58:18.984823   49274 command_runner.go:130] > GitCommitDate:  unknown
	I1028 17:58:18.984830   49274 command_runner.go:130] > GitTreeState:   clean
	I1028 17:58:18.984837   49274 command_runner.go:130] > BuildDate:      2024-10-28T15:50:52Z
	I1028 17:58:18.984844   49274 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 17:58:18.984851   49274 command_runner.go:130] > Compiler:       gc
	I1028 17:58:18.984861   49274 command_runner.go:130] > Platform:       linux/amd64
	I1028 17:58:18.984875   49274 command_runner.go:130] > Linkmode:       dynamic
	I1028 17:58:18.984885   49274 command_runner.go:130] > BuildTags:      
	I1028 17:58:18.984889   49274 command_runner.go:130] >   containers_image_ostree_stub
	I1028 17:58:18.984896   49274 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 17:58:18.984900   49274 command_runner.go:130] >   btrfs_noversion
	I1028 17:58:18.984905   49274 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 17:58:18.984910   49274 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 17:58:18.984913   49274 command_runner.go:130] >   seccomp
	I1028 17:58:18.984917   49274 command_runner.go:130] > LDFlags:          unknown
	I1028 17:58:18.984922   49274 command_runner.go:130] > SeccompEnabled:   true
	I1028 17:58:18.984926   49274 command_runner.go:130] > AppArmorEnabled:  false
	I1028 17:58:18.985951   49274 ssh_runner.go:195] Run: crio --version
	I1028 17:58:19.012146   49274 command_runner.go:130] > crio version 1.29.1
	I1028 17:58:19.012164   49274 command_runner.go:130] > Version:        1.29.1
	I1028 17:58:19.012171   49274 command_runner.go:130] > GitCommit:      unknown
	I1028 17:58:19.012179   49274 command_runner.go:130] > GitCommitDate:  unknown
	I1028 17:58:19.012184   49274 command_runner.go:130] > GitTreeState:   clean
	I1028 17:58:19.012193   49274 command_runner.go:130] > BuildDate:      2024-10-28T15:50:52Z
	I1028 17:58:19.012203   49274 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 17:58:19.012209   49274 command_runner.go:130] > Compiler:       gc
	I1028 17:58:19.012218   49274 command_runner.go:130] > Platform:       linux/amd64
	I1028 17:58:19.012225   49274 command_runner.go:130] > Linkmode:       dynamic
	I1028 17:58:19.012246   49274 command_runner.go:130] > BuildTags:      
	I1028 17:58:19.012256   49274 command_runner.go:130] >   containers_image_ostree_stub
	I1028 17:58:19.012263   49274 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 17:58:19.012269   49274 command_runner.go:130] >   btrfs_noversion
	I1028 17:58:19.012279   49274 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 17:58:19.012287   49274 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 17:58:19.012293   49274 command_runner.go:130] >   seccomp
	I1028 17:58:19.012300   49274 command_runner.go:130] > LDFlags:          unknown
	I1028 17:58:19.012308   49274 command_runner.go:130] > SeccompEnabled:   true
	I1028 17:58:19.012314   49274 command_runner.go:130] > AppArmorEnabled:  false
	I1028 17:58:19.015354   49274 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:58:19.016678   49274 main.go:141] libmachine: (multinode-949956) Calling .GetIP
	I1028 17:58:19.019118   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:19.019468   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:19.019486   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:19.019730   49274 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:58:19.023902   49274 command_runner.go:130] > 192.168.39.1	host.minikube.internal
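The grep above confirms that host.minikube.internal already resolves to the gateway (192.168.39.1). If it were missing, the entry would be appended along these lines (illustrative sketch only, not a command from this run):

	grep -q 'host.minikube.internal' /etc/hosts \
	  || echo '192.168.39.1	host.minikube.internal' | sudo tee -a /etc/hosts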
	I1028 17:58:19.024035   49274 kubeadm.go:883] updating cluster {Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:58:19.024183   49274 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:58:19.024222   49274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:58:19.068321   49274 command_runner.go:130] > {
	I1028 17:58:19.068346   49274 command_runner.go:130] >   "images": [
	I1028 17:58:19.068350   49274 command_runner.go:130] >     {
	I1028 17:58:19.068359   49274 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 17:58:19.068363   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068373   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 17:58:19.068379   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068386   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068397   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 17:58:19.068412   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 17:58:19.068419   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068424   49274 command_runner.go:130] >       "size": "94965812",
	I1028 17:58:19.068430   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068434   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068443   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068449   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068453   49274 command_runner.go:130] >     },
	I1028 17:58:19.068459   49274 command_runner.go:130] >     {
	I1028 17:58:19.068465   49274 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 17:58:19.068485   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068493   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 17:58:19.068502   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068509   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068520   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 17:58:19.068529   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 17:58:19.068533   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068537   49274 command_runner.go:130] >       "size": "1363676",
	I1028 17:58:19.068541   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068550   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068559   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068565   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068575   49274 command_runner.go:130] >     },
	I1028 17:58:19.068581   49274 command_runner.go:130] >     {
	I1028 17:58:19.068591   49274 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 17:58:19.068603   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068613   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 17:58:19.068621   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068629   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068643   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 17:58:19.068660   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 17:58:19.068669   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068676   49274 command_runner.go:130] >       "size": "31470524",
	I1028 17:58:19.068685   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068693   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068700   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068705   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068709   49274 command_runner.go:130] >     },
	I1028 17:58:19.068713   49274 command_runner.go:130] >     {
	I1028 17:58:19.068725   49274 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 17:58:19.068734   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068743   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 17:58:19.068752   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068761   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068775   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 17:58:19.068793   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 17:58:19.068799   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068804   49274 command_runner.go:130] >       "size": "63273227",
	I1028 17:58:19.068814   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068825   49274 command_runner.go:130] >       "username": "nonroot",
	I1028 17:58:19.068834   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068843   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068850   49274 command_runner.go:130] >     },
	I1028 17:58:19.068859   49274 command_runner.go:130] >     {
	I1028 17:58:19.068871   49274 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 17:58:19.068879   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068885   49274 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 17:58:19.068893   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068902   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068916   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 17:58:19.068930   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 17:58:19.068938   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068945   49274 command_runner.go:130] >       "size": "149009664",
	I1028 17:58:19.068953   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.068962   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.068968   49274 command_runner.go:130] >       },
	I1028 17:58:19.068974   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068983   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069011   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069021   49274 command_runner.go:130] >     },
	I1028 17:58:19.069026   49274 command_runner.go:130] >     {
	I1028 17:58:19.069036   49274 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 17:58:19.069045   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069054   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 17:58:19.069062   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069072   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069086   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 17:58:19.069100   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 17:58:19.069109   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069118   49274 command_runner.go:130] >       "size": "95274464",
	I1028 17:58:19.069124   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069133   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.069140   49274 command_runner.go:130] >       },
	I1028 17:58:19.069144   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069152   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069162   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069171   49274 command_runner.go:130] >     },
	I1028 17:58:19.069179   49274 command_runner.go:130] >     {
	I1028 17:58:19.069188   49274 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 17:58:19.069197   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069209   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 17:58:19.069218   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069225   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069235   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 17:58:19.069249   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 17:58:19.069259   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069266   49274 command_runner.go:130] >       "size": "89474374",
	I1028 17:58:19.069275   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069283   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.069292   49274 command_runner.go:130] >       },
	I1028 17:58:19.069301   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069307   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069313   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069317   49274 command_runner.go:130] >     },
	I1028 17:58:19.069325   49274 command_runner.go:130] >     {
	I1028 17:58:19.069339   49274 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 17:58:19.069348   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069359   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 17:58:19.069367   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069376   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069397   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 17:58:19.069411   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 17:58:19.069420   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069430   49274 command_runner.go:130] >       "size": "92783513",
	I1028 17:58:19.069440   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.069446   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069453   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069459   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069465   49274 command_runner.go:130] >     },
	I1028 17:58:19.069470   49274 command_runner.go:130] >     {
	I1028 17:58:19.069478   49274 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 17:58:19.069485   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069494   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 17:58:19.069502   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069514   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069527   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 17:58:19.069542   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 17:58:19.069550   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069559   49274 command_runner.go:130] >       "size": "68457798",
	I1028 17:58:19.069566   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069570   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.069574   49274 command_runner.go:130] >       },
	I1028 17:58:19.069583   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069593   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069603   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069611   49274 command_runner.go:130] >     },
	I1028 17:58:19.069619   49274 command_runner.go:130] >     {
	I1028 17:58:19.069631   49274 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 17:58:19.069641   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069650   49274 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 17:58:19.069654   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069662   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069678   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 17:58:19.069692   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 17:58:19.069700   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069709   49274 command_runner.go:130] >       "size": "742080",
	I1028 17:58:19.069718   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069727   49274 command_runner.go:130] >         "value": "65535"
	I1028 17:58:19.069734   49274 command_runner.go:130] >       },
	I1028 17:58:19.069738   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069742   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069751   49274 command_runner.go:130] >       "pinned": true
	I1028 17:58:19.069759   49274 command_runner.go:130] >     }
	I1028 17:58:19.069767   49274 command_runner.go:130] >   ]
	I1028 17:58:19.069773   49274 command_runner.go:130] > }
	I1028 17:58:19.069985   49274 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:58:19.070006   49274 crio.go:433] Images already preloaded, skipping extraction
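To read the image inventory returned above without the raw JSON, the repo tags can be extracted directly (assumes jq is installed on the node; plain "sudo crictl images" prints a table otherwise):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'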
	I1028 17:58:19.070071   49274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:58:19.102247   49274 command_runner.go:130] > {
	I1028 17:58:19.102269   49274 command_runner.go:130] >   "images": [
	I1028 17:58:19.102276   49274 command_runner.go:130] >     {
	I1028 17:58:19.102288   49274 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 17:58:19.102295   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102310   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 17:58:19.102316   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102320   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102329   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 17:58:19.102336   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 17:58:19.102343   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102348   49274 command_runner.go:130] >       "size": "94965812",
	I1028 17:58:19.102352   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102363   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102373   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102387   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102393   49274 command_runner.go:130] >     },
	I1028 17:58:19.102399   49274 command_runner.go:130] >     {
	I1028 17:58:19.102410   49274 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 17:58:19.102415   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102421   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 17:58:19.102425   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102441   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102453   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 17:58:19.102468   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 17:58:19.102477   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102487   49274 command_runner.go:130] >       "size": "1363676",
	I1028 17:58:19.102496   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102508   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102515   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102520   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102526   49274 command_runner.go:130] >     },
	I1028 17:58:19.102530   49274 command_runner.go:130] >     {
	I1028 17:58:19.102540   49274 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 17:58:19.102550   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102562   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 17:58:19.102570   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102579   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102594   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 17:58:19.102608   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 17:58:19.102615   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102619   49274 command_runner.go:130] >       "size": "31470524",
	I1028 17:58:19.102627   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102636   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102646   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102656   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102664   49274 command_runner.go:130] >     },
	I1028 17:58:19.102673   49274 command_runner.go:130] >     {
	I1028 17:58:19.102686   49274 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 17:58:19.102694   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102700   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 17:58:19.102706   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102713   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102728   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 17:58:19.102746   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 17:58:19.102757   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102766   49274 command_runner.go:130] >       "size": "63273227",
	I1028 17:58:19.102776   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102784   49274 command_runner.go:130] >       "username": "nonroot",
	I1028 17:58:19.102791   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102797   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102807   49274 command_runner.go:130] >     },
	I1028 17:58:19.102815   49274 command_runner.go:130] >     {
	I1028 17:58:19.102828   49274 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 17:58:19.102838   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102848   49274 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 17:58:19.102857   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102864   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102874   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 17:58:19.102887   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 17:58:19.102896   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102907   49274 command_runner.go:130] >       "size": "149009664",
	I1028 17:58:19.102916   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.102925   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.102931   49274 command_runner.go:130] >       },
	I1028 17:58:19.102940   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102947   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102954   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102959   49274 command_runner.go:130] >     },
	I1028 17:58:19.102964   49274 command_runner.go:130] >     {
	I1028 17:58:19.103007   49274 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 17:58:19.103024   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103033   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 17:58:19.103040   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103046   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103061   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 17:58:19.103075   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 17:58:19.103081   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103090   49274 command_runner.go:130] >       "size": "95274464",
	I1028 17:58:19.103099   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103106   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.103114   49274 command_runner.go:130] >       },
	I1028 17:58:19.103121   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103130   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103135   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103144   49274 command_runner.go:130] >     },
	I1028 17:58:19.103149   49274 command_runner.go:130] >     {
	I1028 17:58:19.103163   49274 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 17:58:19.103172   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103181   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 17:58:19.103190   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103199   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103213   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 17:58:19.103223   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 17:58:19.103232   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103242   49274 command_runner.go:130] >       "size": "89474374",
	I1028 17:58:19.103251   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103260   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.103269   49274 command_runner.go:130] >       },
	I1028 17:58:19.103278   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103287   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103296   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103303   49274 command_runner.go:130] >     },
	I1028 17:58:19.103306   49274 command_runner.go:130] >     {
	I1028 17:58:19.103318   49274 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 17:58:19.103328   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103339   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 17:58:19.103348   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103356   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103377   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 17:58:19.103389   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 17:58:19.103396   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103405   49274 command_runner.go:130] >       "size": "92783513",
	I1028 17:58:19.103414   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.103424   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103433   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103439   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103447   49274 command_runner.go:130] >     },
	I1028 17:58:19.103453   49274 command_runner.go:130] >     {
	I1028 17:58:19.103465   49274 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 17:58:19.103473   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103478   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 17:58:19.103485   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103495   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103510   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 17:58:19.103524   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 17:58:19.103532   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103541   49274 command_runner.go:130] >       "size": "68457798",
	I1028 17:58:19.103550   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103558   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.103562   49274 command_runner.go:130] >       },
	I1028 17:58:19.103568   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103577   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103587   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103595   49274 command_runner.go:130] >     },
	I1028 17:58:19.103604   49274 command_runner.go:130] >     {
	I1028 17:58:19.103614   49274 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 17:58:19.103623   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103633   49274 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 17:58:19.103641   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103646   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103658   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 17:58:19.103673   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 17:58:19.103683   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103695   49274 command_runner.go:130] >       "size": "742080",
	I1028 17:58:19.103704   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103714   49274 command_runner.go:130] >         "value": "65535"
	I1028 17:58:19.103722   49274 command_runner.go:130] >       },
	I1028 17:58:19.103728   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103732   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103740   49274 command_runner.go:130] >       "pinned": true
	I1028 17:58:19.103748   49274 command_runner.go:130] >     }
	I1028 17:58:19.103757   49274 command_runner.go:130] >   ]
	I1028 17:58:19.103765   49274 command_runner.go:130] > }
	I1028 17:58:19.103915   49274 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:58:19.103929   49274 cache_images.go:84] Images are preloaded, skipping loading
	I1028 17:58:19.103943   49274 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.2 crio true true} ...
	I1028 17:58:19.104089   49274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-949956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
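The generated [Service] override above is what ends up as a kubelet systemd drop-in. A sketch of how such a drop-in is typically applied; the path is an assumption for illustration (minikube writes and reloads it internally), and $KUBELET_DROPIN is a hypothetical variable standing for the unit text shown above:

	# hypothetical path and variable, shown only to illustrate applying a kubelet drop-in
	printf '%s\n' "$KUBELET_DROPIN" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	sudo systemctl daemon-reload && sudo systemctl restart kubelet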
	I1028 17:58:19.104181   49274 ssh_runner.go:195] Run: crio config
	I1028 17:58:19.139504   49274 command_runner.go:130] ! time="2024-10-28 17:58:19.124706900Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1028 17:58:19.146145   49274 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1028 17:58:19.156713   49274 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1028 17:58:19.156739   49274 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1028 17:58:19.156750   49274 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1028 17:58:19.156755   49274 command_runner.go:130] > #
	I1028 17:58:19.156767   49274 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1028 17:58:19.156781   49274 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1028 17:58:19.156790   49274 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1028 17:58:19.156835   49274 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1028 17:58:19.156847   49274 command_runner.go:130] > # reload'.
	I1028 17:58:19.156856   49274 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1028 17:58:19.156866   49274 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1028 17:58:19.156876   49274 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1028 17:58:19.156885   49274 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1028 17:58:19.156891   49274 command_runner.go:130] > [crio]
	I1028 17:58:19.156901   49274 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1028 17:58:19.156912   49274 command_runner.go:130] > # containers images, in this directory.
	I1028 17:58:19.156919   49274 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1028 17:58:19.156932   49274 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1028 17:58:19.156944   49274 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1028 17:58:19.156957   49274 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1028 17:58:19.156964   49274 command_runner.go:130] > # imagestore = ""
	I1028 17:58:19.156974   49274 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1028 17:58:19.156983   49274 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1028 17:58:19.156991   49274 command_runner.go:130] > storage_driver = "overlay"
	I1028 17:58:19.157001   49274 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1028 17:58:19.157014   49274 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1028 17:58:19.157021   49274 command_runner.go:130] > storage_option = [
	I1028 17:58:19.157032   49274 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1028 17:58:19.157040   49274 command_runner.go:130] > ]
	I1028 17:58:19.157049   49274 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1028 17:58:19.157058   49274 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1028 17:58:19.157063   49274 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1028 17:58:19.157071   49274 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1028 17:58:19.157077   49274 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1028 17:58:19.157081   49274 command_runner.go:130] > # always happen on a node reboot
	I1028 17:58:19.157087   49274 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1028 17:58:19.157098   49274 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1028 17:58:19.157106   49274 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1028 17:58:19.157114   49274 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1028 17:58:19.157121   49274 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1028 17:58:19.157131   49274 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1028 17:58:19.157141   49274 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1028 17:58:19.157147   49274 command_runner.go:130] > # internal_wipe = true
	I1028 17:58:19.157155   49274 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1028 17:58:19.157163   49274 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1028 17:58:19.157167   49274 command_runner.go:130] > # internal_repair = false
	I1028 17:58:19.157175   49274 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1028 17:58:19.157181   49274 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1028 17:58:19.157186   49274 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1028 17:58:19.157192   49274 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1028 17:58:19.157199   49274 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1028 17:58:19.157207   49274 command_runner.go:130] > [crio.api]
	I1028 17:58:19.157212   49274 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1028 17:58:19.157219   49274 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1028 17:58:19.157225   49274 command_runner.go:130] > # IP address on which the stream server will listen.
	I1028 17:58:19.157232   49274 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1028 17:58:19.157239   49274 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1028 17:58:19.157246   49274 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1028 17:58:19.157250   49274 command_runner.go:130] > # stream_port = "0"
	I1028 17:58:19.157260   49274 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1028 17:58:19.157270   49274 command_runner.go:130] > # stream_enable_tls = false
	I1028 17:58:19.157279   49274 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1028 17:58:19.157289   49274 command_runner.go:130] > # stream_idle_timeout = ""
	I1028 17:58:19.157298   49274 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1028 17:58:19.157311   49274 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1028 17:58:19.157319   49274 command_runner.go:130] > # minutes.
	I1028 17:58:19.157326   49274 command_runner.go:130] > # stream_tls_cert = ""
	I1028 17:58:19.157343   49274 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1028 17:58:19.157356   49274 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1028 17:58:19.157364   49274 command_runner.go:130] > # stream_tls_key = ""
	I1028 17:58:19.157370   49274 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1028 17:58:19.157377   49274 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1028 17:58:19.157392   49274 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1028 17:58:19.157398   49274 command_runner.go:130] > # stream_tls_ca = ""
	I1028 17:58:19.157406   49274 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 17:58:19.157412   49274 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1028 17:58:19.157420   49274 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 17:58:19.157427   49274 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1028 17:58:19.157433   49274 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1028 17:58:19.157441   49274 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1028 17:58:19.157445   49274 command_runner.go:130] > [crio.runtime]
	I1028 17:58:19.157451   49274 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1028 17:58:19.157457   49274 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1028 17:58:19.157463   49274 command_runner.go:130] > # "nofile=1024:2048"
	I1028 17:58:19.157469   49274 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1028 17:58:19.157476   49274 command_runner.go:130] > # default_ulimits = [
	I1028 17:58:19.157479   49274 command_runner.go:130] > # ]
	I1028 17:58:19.157485   49274 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1028 17:58:19.157489   49274 command_runner.go:130] > # no_pivot = false
	I1028 17:58:19.157494   49274 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1028 17:58:19.157502   49274 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1028 17:58:19.157507   49274 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1028 17:58:19.157515   49274 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1028 17:58:19.157520   49274 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1028 17:58:19.157527   49274 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 17:58:19.157533   49274 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1028 17:58:19.157537   49274 command_runner.go:130] > # Cgroup setting for conmon
	I1028 17:58:19.157544   49274 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1028 17:58:19.157550   49274 command_runner.go:130] > conmon_cgroup = "pod"
	I1028 17:58:19.157556   49274 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1028 17:58:19.157563   49274 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1028 17:58:19.157570   49274 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 17:58:19.157576   49274 command_runner.go:130] > conmon_env = [
	I1028 17:58:19.157582   49274 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 17:58:19.157585   49274 command_runner.go:130] > ]
	I1028 17:58:19.157590   49274 command_runner.go:130] > # Additional environment variables to set for all the
	I1028 17:58:19.157597   49274 command_runner.go:130] > # containers. These are overridden if set in the
	I1028 17:58:19.157602   49274 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1028 17:58:19.157608   49274 command_runner.go:130] > # default_env = [
	I1028 17:58:19.157611   49274 command_runner.go:130] > # ]
	I1028 17:58:19.157617   49274 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1028 17:58:19.157627   49274 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1028 17:58:19.157631   49274 command_runner.go:130] > # selinux = false
	I1028 17:58:19.157637   49274 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1028 17:58:19.157645   49274 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1028 17:58:19.157651   49274 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1028 17:58:19.157657   49274 command_runner.go:130] > # seccomp_profile = ""
	I1028 17:58:19.157663   49274 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1028 17:58:19.157671   49274 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1028 17:58:19.157677   49274 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1028 17:58:19.157683   49274 command_runner.go:130] > # which might increase security.
	I1028 17:58:19.157687   49274 command_runner.go:130] > # This option is currently deprecated,
	I1028 17:58:19.157695   49274 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1028 17:58:19.157699   49274 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1028 17:58:19.157709   49274 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1028 17:58:19.157715   49274 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1028 17:58:19.157723   49274 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1028 17:58:19.157729   49274 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1028 17:58:19.157736   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.157741   49274 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1028 17:58:19.157748   49274 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1028 17:58:19.157753   49274 command_runner.go:130] > # the cgroup blockio controller.
	I1028 17:58:19.157759   49274 command_runner.go:130] > # blockio_config_file = ""
	I1028 17:58:19.157766   49274 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1028 17:58:19.157772   49274 command_runner.go:130] > # blockio parameters.
	I1028 17:58:19.157775   49274 command_runner.go:130] > # blockio_reload = false
	I1028 17:58:19.157782   49274 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1028 17:58:19.157788   49274 command_runner.go:130] > # irqbalance daemon.
	I1028 17:58:19.157793   49274 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1028 17:58:19.157800   49274 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1028 17:58:19.157808   49274 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1028 17:58:19.157816   49274 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1028 17:58:19.157821   49274 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1028 17:58:19.157829   49274 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1028 17:58:19.157835   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.157841   49274 command_runner.go:130] > # rdt_config_file = ""
	I1028 17:58:19.157846   49274 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1028 17:58:19.157851   49274 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1028 17:58:19.157866   49274 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1028 17:58:19.157873   49274 command_runner.go:130] > # separate_pull_cgroup = ""
	I1028 17:58:19.157879   49274 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1028 17:58:19.157888   49274 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1028 17:58:19.157892   49274 command_runner.go:130] > # will be added.
	I1028 17:58:19.157896   49274 command_runner.go:130] > # default_capabilities = [
	I1028 17:58:19.157900   49274 command_runner.go:130] > # 	"CHOWN",
	I1028 17:58:19.157904   49274 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1028 17:58:19.157908   49274 command_runner.go:130] > # 	"FSETID",
	I1028 17:58:19.157912   49274 command_runner.go:130] > # 	"FOWNER",
	I1028 17:58:19.157918   49274 command_runner.go:130] > # 	"SETGID",
	I1028 17:58:19.157921   49274 command_runner.go:130] > # 	"SETUID",
	I1028 17:58:19.157928   49274 command_runner.go:130] > # 	"SETPCAP",
	I1028 17:58:19.157932   49274 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1028 17:58:19.157937   49274 command_runner.go:130] > # 	"KILL",
	I1028 17:58:19.157941   49274 command_runner.go:130] > # ]
	I1028 17:58:19.157947   49274 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1028 17:58:19.157956   49274 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1028 17:58:19.157960   49274 command_runner.go:130] > # add_inheritable_capabilities = false
	I1028 17:58:19.157969   49274 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1028 17:58:19.157975   49274 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 17:58:19.157981   49274 command_runner.go:130] > default_sysctls = [
	I1028 17:58:19.157985   49274 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1028 17:58:19.157989   49274 command_runner.go:130] > ]
	I1028 17:58:19.157994   49274 command_runner.go:130] > # List of devices on the host that a
	I1028 17:58:19.158002   49274 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1028 17:58:19.158006   49274 command_runner.go:130] > # allowed_devices = [
	I1028 17:58:19.158010   49274 command_runner.go:130] > # 	"/dev/fuse",
	I1028 17:58:19.158013   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158018   49274 command_runner.go:130] > # List of additional devices, specified as
	I1028 17:58:19.158027   49274 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1028 17:58:19.158032   49274 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1028 17:58:19.158040   49274 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 17:58:19.158045   49274 command_runner.go:130] > # additional_devices = [
	I1028 17:58:19.158048   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158053   49274 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1028 17:58:19.158059   49274 command_runner.go:130] > # cdi_spec_dirs = [
	I1028 17:58:19.158063   49274 command_runner.go:130] > # 	"/etc/cdi",
	I1028 17:58:19.158066   49274 command_runner.go:130] > # 	"/var/run/cdi",
	I1028 17:58:19.158071   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158077   49274 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1028 17:58:19.158083   49274 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1028 17:58:19.158089   49274 command_runner.go:130] > # Defaults to false.
	I1028 17:58:19.158094   49274 command_runner.go:130] > # device_ownership_from_security_context = false
	I1028 17:58:19.158100   49274 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1028 17:58:19.158108   49274 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1028 17:58:19.158112   49274 command_runner.go:130] > # hooks_dir = [
	I1028 17:58:19.158116   49274 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1028 17:58:19.158121   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158127   49274 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1028 17:58:19.158133   49274 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1028 17:58:19.158138   49274 command_runner.go:130] > # its default mounts from the following two files:
	I1028 17:58:19.158143   49274 command_runner.go:130] > #
	I1028 17:58:19.158149   49274 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1028 17:58:19.158157   49274 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1028 17:58:19.158163   49274 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1028 17:58:19.158168   49274 command_runner.go:130] > #
	I1028 17:58:19.158173   49274 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1028 17:58:19.158182   49274 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1028 17:58:19.158188   49274 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1028 17:58:19.158195   49274 command_runner.go:130] > #      only add mounts it finds in this file.
	I1028 17:58:19.158199   49274 command_runner.go:130] > #
	I1028 17:58:19.158203   49274 command_runner.go:130] > # default_mounts_file = ""
	I1028 17:58:19.158208   49274 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1028 17:58:19.158215   49274 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1028 17:58:19.158221   49274 command_runner.go:130] > pids_limit = 1024
	I1028 17:58:19.158228   49274 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1028 17:58:19.158233   49274 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1028 17:58:19.158241   49274 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1028 17:58:19.158251   49274 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1028 17:58:19.158259   49274 command_runner.go:130] > # log_size_max = -1
	I1028 17:58:19.158270   49274 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1028 17:58:19.158280   49274 command_runner.go:130] > # log_to_journald = false
	I1028 17:58:19.158289   49274 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1028 17:58:19.158300   49274 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1028 17:58:19.158311   49274 command_runner.go:130] > # Path to directory for container attach sockets.
	I1028 17:58:19.158322   49274 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1028 17:58:19.158329   49274 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1028 17:58:19.158340   49274 command_runner.go:130] > # bind_mount_prefix = ""
	I1028 17:58:19.158348   49274 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1028 17:58:19.158352   49274 command_runner.go:130] > # read_only = false
	I1028 17:58:19.158358   49274 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1028 17:58:19.158367   49274 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1028 17:58:19.158371   49274 command_runner.go:130] > # live configuration reload.
	I1028 17:58:19.158376   49274 command_runner.go:130] > # log_level = "info"
	I1028 17:58:19.158385   49274 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1028 17:58:19.158390   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.158396   49274 command_runner.go:130] > # log_filter = ""
	I1028 17:58:19.158402   49274 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1028 17:58:19.158412   49274 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1028 17:58:19.158417   49274 command_runner.go:130] > # separated by comma.
	I1028 17:58:19.158424   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158430   49274 command_runner.go:130] > # uid_mappings = ""
	I1028 17:58:19.158436   49274 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1028 17:58:19.158444   49274 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1028 17:58:19.158449   49274 command_runner.go:130] > # separated by comma.
	I1028 17:58:19.158458   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158462   49274 command_runner.go:130] > # gid_mappings = ""
	I1028 17:58:19.158469   49274 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1028 17:58:19.158478   49274 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 17:58:19.158484   49274 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 17:58:19.158494   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158501   49274 command_runner.go:130] > # minimum_mappable_uid = -1
	I1028 17:58:19.158507   49274 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1028 17:58:19.158515   49274 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 17:58:19.158521   49274 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 17:58:19.158531   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158536   49274 command_runner.go:130] > # minimum_mappable_gid = -1
	I1028 17:58:19.158544   49274 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1028 17:58:19.158550   49274 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1028 17:58:19.158557   49274 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1028 17:58:19.158561   49274 command_runner.go:130] > # ctr_stop_timeout = 30
	I1028 17:58:19.158569   49274 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1028 17:58:19.158575   49274 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1028 17:58:19.158582   49274 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1028 17:58:19.158587   49274 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1028 17:58:19.158593   49274 command_runner.go:130] > drop_infra_ctr = false
	I1028 17:58:19.158599   49274 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1028 17:58:19.158605   49274 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1028 17:58:19.158612   49274 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1028 17:58:19.158618   49274 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1028 17:58:19.158625   49274 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1028 17:58:19.158632   49274 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1028 17:58:19.158638   49274 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1028 17:58:19.158643   49274 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1028 17:58:19.158647   49274 command_runner.go:130] > # shared_cpuset = ""
	I1028 17:58:19.158653   49274 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1028 17:58:19.158660   49274 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1028 17:58:19.158665   49274 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1028 17:58:19.158673   49274 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1028 17:58:19.158678   49274 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1028 17:58:19.158683   49274 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1028 17:58:19.158689   49274 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1028 17:58:19.158694   49274 command_runner.go:130] > # enable_criu_support = false
	I1028 17:58:19.158700   49274 command_runner.go:130] > # Enable/disable the generation of the container,
	I1028 17:58:19.158708   49274 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1028 17:58:19.158712   49274 command_runner.go:130] > # enable_pod_events = false
	I1028 17:58:19.158721   49274 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 17:58:19.158734   49274 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1028 17:58:19.158738   49274 command_runner.go:130] > # default_runtime = "runc"
	I1028 17:58:19.158745   49274 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1028 17:58:19.158752   49274 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1028 17:58:19.158763   49274 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1028 17:58:19.158770   49274 command_runner.go:130] > # creation as a file is not desired either.
	I1028 17:58:19.158778   49274 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1028 17:58:19.158785   49274 command_runner.go:130] > # the hostname is being managed dynamically.
	I1028 17:58:19.158789   49274 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1028 17:58:19.158792   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158798   49274 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1028 17:58:19.158806   49274 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1028 17:58:19.158812   49274 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1028 17:58:19.158820   49274 command_runner.go:130] > # Each entry in the table should follow the format:
	I1028 17:58:19.158823   49274 command_runner.go:130] > #
	I1028 17:58:19.158830   49274 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1028 17:58:19.158835   49274 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1028 17:58:19.158856   49274 command_runner.go:130] > # runtime_type = "oci"
	I1028 17:58:19.158863   49274 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1028 17:58:19.158867   49274 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1028 17:58:19.158872   49274 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1028 17:58:19.158877   49274 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1028 17:58:19.158884   49274 command_runner.go:130] > # monitor_env = []
	I1028 17:58:19.158888   49274 command_runner.go:130] > # privileged_without_host_devices = false
	I1028 17:58:19.158893   49274 command_runner.go:130] > # allowed_annotations = []
	I1028 17:58:19.158900   49274 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1028 17:58:19.158903   49274 command_runner.go:130] > # Where:
	I1028 17:58:19.158909   49274 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1028 17:58:19.158917   49274 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1028 17:58:19.158924   49274 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1028 17:58:19.158932   49274 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1028 17:58:19.158936   49274 command_runner.go:130] > #   in $PATH.
	I1028 17:58:19.158942   49274 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1028 17:58:19.158947   49274 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1028 17:58:19.158953   49274 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1028 17:58:19.158958   49274 command_runner.go:130] > #   state.
	I1028 17:58:19.158964   49274 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1028 17:58:19.158972   49274 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1028 17:58:19.158978   49274 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1028 17:58:19.158986   49274 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1028 17:58:19.158992   49274 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1028 17:58:19.158998   49274 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1028 17:58:19.159003   49274 command_runner.go:130] > #   The currently recognized values are:
	I1028 17:58:19.159009   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1028 17:58:19.159018   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1028 17:58:19.159024   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1028 17:58:19.159032   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1028 17:58:19.159040   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1028 17:58:19.159048   49274 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1028 17:58:19.159054   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1028 17:58:19.159062   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1028 17:58:19.159068   49274 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1028 17:58:19.159076   49274 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1028 17:58:19.159080   49274 command_runner.go:130] > #   deprecated option "conmon".
	I1028 17:58:19.159088   49274 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1028 17:58:19.159096   49274 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1028 17:58:19.159102   49274 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1028 17:58:19.159109   49274 command_runner.go:130] > #   should be moved to the container's cgroup
	I1028 17:58:19.159117   49274 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1028 17:58:19.159124   49274 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1028 17:58:19.159130   49274 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1028 17:58:19.159138   49274 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1028 17:58:19.159141   49274 command_runner.go:130] > #
	I1028 17:58:19.159146   49274 command_runner.go:130] > # Using the seccomp notifier feature:
	I1028 17:58:19.159149   49274 command_runner.go:130] > #
	I1028 17:58:19.159157   49274 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1028 17:58:19.159163   49274 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1028 17:58:19.159168   49274 command_runner.go:130] > #
	I1028 17:58:19.159174   49274 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1028 17:58:19.159180   49274 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1028 17:58:19.159185   49274 command_runner.go:130] > #
	I1028 17:58:19.159190   49274 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1028 17:58:19.159196   49274 command_runner.go:130] > # feature.
	I1028 17:58:19.159198   49274 command_runner.go:130] > #
	I1028 17:58:19.159204   49274 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1028 17:58:19.159212   49274 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1028 17:58:19.159218   49274 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1028 17:58:19.159226   49274 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1028 17:58:19.159232   49274 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1028 17:58:19.159237   49274 command_runner.go:130] > #
	I1028 17:58:19.159243   49274 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1028 17:58:19.159252   49274 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1028 17:58:19.159260   49274 command_runner.go:130] > #
	I1028 17:58:19.159269   49274 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1028 17:58:19.159280   49274 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1028 17:58:19.159285   49274 command_runner.go:130] > #
	I1028 17:58:19.159296   49274 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1028 17:58:19.159308   49274 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1028 17:58:19.159316   49274 command_runner.go:130] > # limitation.
	I1028 17:58:19.159325   49274 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1028 17:58:19.159334   49274 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1028 17:58:19.159347   49274 command_runner.go:130] > runtime_type = "oci"
	I1028 17:58:19.159351   49274 command_runner.go:130] > runtime_root = "/run/runc"
	I1028 17:58:19.159356   49274 command_runner.go:130] > runtime_config_path = ""
	I1028 17:58:19.159361   49274 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1028 17:58:19.159367   49274 command_runner.go:130] > monitor_cgroup = "pod"
	I1028 17:58:19.159371   49274 command_runner.go:130] > monitor_exec_cgroup = ""
	I1028 17:58:19.159374   49274 command_runner.go:130] > monitor_env = [
	I1028 17:58:19.159380   49274 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 17:58:19.159385   49274 command_runner.go:130] > ]
	I1028 17:58:19.159392   49274 command_runner.go:130] > privileged_without_host_devices = false
	I1028 17:58:19.159400   49274 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1028 17:58:19.159406   49274 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1028 17:58:19.159414   49274 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1028 17:58:19.159422   49274 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1028 17:58:19.159432   49274 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1028 17:58:19.159440   49274 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1028 17:58:19.159449   49274 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1028 17:58:19.159458   49274 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1028 17:58:19.159464   49274 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1028 17:58:19.159473   49274 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1028 17:58:19.159478   49274 command_runner.go:130] > # Example:
	I1028 17:58:19.159482   49274 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1028 17:58:19.159489   49274 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1028 17:58:19.159493   49274 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1028 17:58:19.159501   49274 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1028 17:58:19.159505   49274 command_runner.go:130] > # cpuset = 0
	I1028 17:58:19.159511   49274 command_runner.go:130] > # cpushares = "0-1"
	I1028 17:58:19.159515   49274 command_runner.go:130] > # Where:
	I1028 17:58:19.159521   49274 command_runner.go:130] > # The workload name is workload-type.
	I1028 17:58:19.159527   49274 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1028 17:58:19.159534   49274 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1028 17:58:19.159540   49274 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1028 17:58:19.159550   49274 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1028 17:58:19.159555   49274 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1028 17:58:19.159561   49274 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1028 17:58:19.159568   49274 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1028 17:58:19.159575   49274 command_runner.go:130] > # Default value is set to true
	I1028 17:58:19.159579   49274 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1028 17:58:19.159588   49274 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1028 17:58:19.159592   49274 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1028 17:58:19.159599   49274 command_runner.go:130] > # Default value is set to 'false'
	I1028 17:58:19.159603   49274 command_runner.go:130] > # disable_hostport_mapping = false
	I1028 17:58:19.159609   49274 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1028 17:58:19.159612   49274 command_runner.go:130] > #
	I1028 17:58:19.159618   49274 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1028 17:58:19.159624   49274 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1028 17:58:19.159630   49274 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1028 17:58:19.159636   49274 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1028 17:58:19.159641   49274 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1028 17:58:19.159644   49274 command_runner.go:130] > [crio.image]
	I1028 17:58:19.159650   49274 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1028 17:58:19.159655   49274 command_runner.go:130] > # default_transport = "docker://"
	I1028 17:58:19.159660   49274 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1028 17:58:19.159666   49274 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1028 17:58:19.159670   49274 command_runner.go:130] > # global_auth_file = ""
	I1028 17:58:19.159674   49274 command_runner.go:130] > # The image used to instantiate infra containers.
	I1028 17:58:19.159679   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.159683   49274 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1028 17:58:19.159690   49274 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1028 17:58:19.159695   49274 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1028 17:58:19.159700   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.159706   49274 command_runner.go:130] > # pause_image_auth_file = ""
	I1028 17:58:19.159712   49274 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1028 17:58:19.159718   49274 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1028 17:58:19.159725   49274 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1028 17:58:19.159731   49274 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1028 17:58:19.159738   49274 command_runner.go:130] > # pause_command = "/pause"
	I1028 17:58:19.159743   49274 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1028 17:58:19.159749   49274 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1028 17:58:19.159756   49274 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1028 17:58:19.159765   49274 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1028 17:58:19.159771   49274 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1028 17:58:19.159778   49274 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1028 17:58:19.159784   49274 command_runner.go:130] > # pinned_images = [
	I1028 17:58:19.159788   49274 command_runner.go:130] > # ]
	I1028 17:58:19.159796   49274 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1028 17:58:19.159802   49274 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1028 17:58:19.159810   49274 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1028 17:58:19.159818   49274 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1028 17:58:19.159824   49274 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1028 17:58:19.159830   49274 command_runner.go:130] > # signature_policy = ""
	I1028 17:58:19.159835   49274 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1028 17:58:19.159842   49274 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1028 17:58:19.159851   49274 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1028 17:58:19.159857   49274 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1028 17:58:19.159865   49274 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1028 17:58:19.159869   49274 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1028 17:58:19.159878   49274 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1028 17:58:19.159884   49274 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1028 17:58:19.159890   49274 command_runner.go:130] > # changing them here.
	I1028 17:58:19.159895   49274 command_runner.go:130] > # insecure_registries = [
	I1028 17:58:19.159900   49274 command_runner.go:130] > # ]
	I1028 17:58:19.159906   49274 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1028 17:58:19.159913   49274 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1028 17:58:19.159917   49274 command_runner.go:130] > # image_volumes = "mkdir"
	I1028 17:58:19.159923   49274 command_runner.go:130] > # Temporary directory to use for storing big files
	I1028 17:58:19.159927   49274 command_runner.go:130] > # big_files_temporary_dir = ""
	I1028 17:58:19.159933   49274 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1028 17:58:19.159939   49274 command_runner.go:130] > # CNI plugins.
	I1028 17:58:19.159942   49274 command_runner.go:130] > [crio.network]
	I1028 17:58:19.159948   49274 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1028 17:58:19.159955   49274 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1028 17:58:19.159960   49274 command_runner.go:130] > # cni_default_network = ""
	I1028 17:58:19.159969   49274 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1028 17:58:19.159973   49274 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1028 17:58:19.159978   49274 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1028 17:58:19.159984   49274 command_runner.go:130] > # plugin_dirs = [
	I1028 17:58:19.159987   49274 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1028 17:58:19.159991   49274 command_runner.go:130] > # ]
	I1028 17:58:19.159996   49274 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1028 17:58:19.160002   49274 command_runner.go:130] > [crio.metrics]
	I1028 17:58:19.160007   49274 command_runner.go:130] > # Globally enable or disable metrics support.
	I1028 17:58:19.160013   49274 command_runner.go:130] > enable_metrics = true
	I1028 17:58:19.160017   49274 command_runner.go:130] > # Specify enabled metrics collectors.
	I1028 17:58:19.160022   49274 command_runner.go:130] > # Per default all metrics are enabled.
	I1028 17:58:19.160030   49274 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1028 17:58:19.160037   49274 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1028 17:58:19.160045   49274 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1028 17:58:19.160049   49274 command_runner.go:130] > # metrics_collectors = [
	I1028 17:58:19.160055   49274 command_runner.go:130] > # 	"operations",
	I1028 17:58:19.160059   49274 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1028 17:58:19.160064   49274 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1028 17:58:19.160070   49274 command_runner.go:130] > # 	"operations_errors",
	I1028 17:58:19.160074   49274 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1028 17:58:19.160083   49274 command_runner.go:130] > # 	"image_pulls_by_name",
	I1028 17:58:19.160090   49274 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1028 17:58:19.160096   49274 command_runner.go:130] > # 	"image_pulls_failures",
	I1028 17:58:19.160100   49274 command_runner.go:130] > # 	"image_pulls_successes",
	I1028 17:58:19.160107   49274 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1028 17:58:19.160111   49274 command_runner.go:130] > # 	"image_layer_reuse",
	I1028 17:58:19.160115   49274 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1028 17:58:19.160119   49274 command_runner.go:130] > # 	"containers_oom_total",
	I1028 17:58:19.160123   49274 command_runner.go:130] > # 	"containers_oom",
	I1028 17:58:19.160128   49274 command_runner.go:130] > # 	"processes_defunct",
	I1028 17:58:19.160131   49274 command_runner.go:130] > # 	"operations_total",
	I1028 17:58:19.160136   49274 command_runner.go:130] > # 	"operations_latency_seconds",
	I1028 17:58:19.160143   49274 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1028 17:58:19.160147   49274 command_runner.go:130] > # 	"operations_errors_total",
	I1028 17:58:19.160152   49274 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1028 17:58:19.160158   49274 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1028 17:58:19.160162   49274 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1028 17:58:19.160168   49274 command_runner.go:130] > # 	"image_pulls_success_total",
	I1028 17:58:19.160172   49274 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1028 17:58:19.160176   49274 command_runner.go:130] > # 	"containers_oom_count_total",
	I1028 17:58:19.160181   49274 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1028 17:58:19.160185   49274 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1028 17:58:19.160191   49274 command_runner.go:130] > # ]
	I1028 17:58:19.160196   49274 command_runner.go:130] > # The port on which the metrics server will listen.
	I1028 17:58:19.160203   49274 command_runner.go:130] > # metrics_port = 9090
	I1028 17:58:19.160208   49274 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1028 17:58:19.160212   49274 command_runner.go:130] > # metrics_socket = ""
	I1028 17:58:19.160217   49274 command_runner.go:130] > # The certificate for the secure metrics server.
	I1028 17:58:19.160223   49274 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1028 17:58:19.160230   49274 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1028 17:58:19.160234   49274 command_runner.go:130] > # certificate on any modification event.
	I1028 17:58:19.160240   49274 command_runner.go:130] > # metrics_cert = ""
	I1028 17:58:19.160245   49274 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1028 17:58:19.160253   49274 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1028 17:58:19.160260   49274 command_runner.go:130] > # metrics_key = ""
	I1028 17:58:19.160271   49274 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1028 17:58:19.160280   49274 command_runner.go:130] > [crio.tracing]
	I1028 17:58:19.160289   49274 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1028 17:58:19.160298   49274 command_runner.go:130] > # enable_tracing = false
	I1028 17:58:19.160306   49274 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1028 17:58:19.160316   49274 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1028 17:58:19.160326   49274 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1028 17:58:19.160340   49274 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1028 17:58:19.160349   49274 command_runner.go:130] > # CRI-O NRI configuration.
	I1028 17:58:19.160356   49274 command_runner.go:130] > [crio.nri]
	I1028 17:58:19.160363   49274 command_runner.go:130] > # Globally enable or disable NRI.
	I1028 17:58:19.160367   49274 command_runner.go:130] > # enable_nri = false
	I1028 17:58:19.160373   49274 command_runner.go:130] > # NRI socket to listen on.
	I1028 17:58:19.160378   49274 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1028 17:58:19.160384   49274 command_runner.go:130] > # NRI plugin directory to use.
	I1028 17:58:19.160389   49274 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1028 17:58:19.160396   49274 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1028 17:58:19.160401   49274 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1028 17:58:19.160406   49274 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1028 17:58:19.160413   49274 command_runner.go:130] > # nri_disable_connections = false
	I1028 17:58:19.160418   49274 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1028 17:58:19.160423   49274 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1028 17:58:19.160429   49274 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1028 17:58:19.160435   49274 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1028 17:58:19.160441   49274 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1028 17:58:19.160447   49274 command_runner.go:130] > [crio.stats]
	I1028 17:58:19.160453   49274 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1028 17:58:19.160461   49274 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1028 17:58:19.160465   49274 command_runner.go:130] > # stats_collection_period = 0
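	For reference, the crio.runtime.runtimes table format documented in the config dump above can register additional OCI handlers alongside the runc entry shown. A minimal sketch, assuming a hypothetical "crun" handler installed at /usr/bin/crun (not part of this run's configuration):

	# Hypothetical extra runtime handler, following the table format described above.
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"            # assumed install path, not verified in this run
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"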
	I1028 17:58:19.160555   49274 cni.go:84] Creating CNI manager for ""
	I1028 17:58:19.160569   49274 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 17:58:19.160580   49274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:58:19.160605   49274 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-949956 NodeName:multinode-949956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:58:19.160717   49274 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-949956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.203"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:58:19.160774   49274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:58:19.170814   49274 command_runner.go:130] > kubeadm
	I1028 17:58:19.170832   49274 command_runner.go:130] > kubectl
	I1028 17:58:19.170837   49274 command_runner.go:130] > kubelet
	I1028 17:58:19.170925   49274 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:58:19.170977   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 17:58:19.180017   49274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 17:58:19.196276   49274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:58:19.211917   49274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1028 17:58:19.227954   49274 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I1028 17:58:19.231678   49274 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I1028 17:58:19.231715   49274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:58:19.367105   49274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:58:19.382093   49274 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956 for IP: 192.168.39.203
	I1028 17:58:19.382114   49274 certs.go:194] generating shared ca certs ...
	I1028 17:58:19.382131   49274 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:58:19.382298   49274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:58:19.382354   49274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:58:19.382370   49274 certs.go:256] generating profile certs ...
	I1028 17:58:19.382487   49274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/client.key
	I1028 17:58:19.382560   49274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.key.00aa27e5
	I1028 17:58:19.382607   49274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.key
	I1028 17:58:19.382627   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:58:19.382648   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:58:19.382665   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:58:19.382681   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:58:19.382696   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:58:19.382715   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:58:19.382732   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:58:19.382751   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:58:19.382820   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:58:19.382869   49274 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:58:19.382884   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:58:19.382912   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:58:19.382945   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:58:19.382975   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:58:19.383032   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:58:19.383068   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.383088   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.383106   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.383724   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:58:19.408076   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:58:19.431343   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:58:19.454534   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:58:19.477441   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 17:58:19.500412   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:58:19.524138   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:58:19.547533   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:58:19.570628   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:58:19.593563   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:58:19.621654   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:58:19.644520   49274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:58:19.660332   49274 ssh_runner.go:195] Run: openssl version
	I1028 17:58:19.665928   49274 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 17:58:19.666091   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:58:19.676403   49274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.680748   49274 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.680918   49274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.680961   49274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.686441   49274 command_runner.go:130] > 3ec20f2e
	I1028 17:58:19.686497   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:58:19.695227   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:58:19.705265   49274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.709474   49274 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.709626   49274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.709671   49274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.714897   49274 command_runner.go:130] > b5213941
	I1028 17:58:19.715087   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:58:19.723727   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:58:19.733811   49274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.738135   49274 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.738428   49274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.738467   49274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.743815   49274 command_runner.go:130] > 51391683
	I1028 17:58:19.743865   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:58:19.752625   49274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:58:19.756995   49274 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:58:19.757031   49274 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 17:58:19.757040   49274 command_runner.go:130] > Device: 253,1	Inode: 532782      Links: 1
	I1028 17:58:19.757052   49274 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 17:58:19.757065   49274 command_runner.go:130] > Access: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757073   49274 command_runner.go:130] > Modify: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757083   49274 command_runner.go:130] > Change: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757093   49274 command_runner.go:130] >  Birth: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757131   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 17:58:19.762683   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.762737   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 17:58:19.767989   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.768159   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 17:58:19.773529   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.773584   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 17:58:19.778825   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.778997   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 17:58:19.784416   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.784479   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 17:58:19.789781   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.789869   49274 kubeadm.go:392] StartCluster: {Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:58:19.790005   49274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:58:19.790042   49274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:58:19.826311   49274 command_runner.go:130] > 03d9d60b5a766544e09d26870287f8e1126cb621ed21d15ffd8b49316e74b88a
	I1028 17:58:19.826347   49274 command_runner.go:130] > f561182f7915f4870775ac8540f78f768c4b6604993a02d6fee46c33b43858e3
	I1028 17:58:19.826356   49274 command_runner.go:130] > 5500ceb27304706a4e21106a195e38a6d57c4ee046146168e8d435a2ceadf143
	I1028 17:58:19.826366   49274 command_runner.go:130] > fd134912be1f33eb5df2f51c1c091b2782b720093e8145a73fc3ced1ed3d61b0
	I1028 17:58:19.826374   49274 command_runner.go:130] > 6be4f0150414ecb719308e654dfe475cc60d922a30553913db6f21c791604523
	I1028 17:58:19.826387   49274 command_runner.go:130] > 4f1fcae7239a1074023a23c8ca05de17f39ebad262a1d6e58d4752e0649431a2
	I1028 17:58:19.826396   49274 command_runner.go:130] > a878b44f1390e731efe4ea8becae131923aee9984a263360abcde7ab1efbaf4c
	I1028 17:58:19.826406   49274 command_runner.go:130] > c8c4b6d9475bbd1e1e80a611f61fe02c69d83a9a3f482001baf1517cf848d1c5
	I1028 17:58:19.826433   49274 cri.go:89] found id: "03d9d60b5a766544e09d26870287f8e1126cb621ed21d15ffd8b49316e74b88a"
	I1028 17:58:19.826444   49274 cri.go:89] found id: "f561182f7915f4870775ac8540f78f768c4b6604993a02d6fee46c33b43858e3"
	I1028 17:58:19.826449   49274 cri.go:89] found id: "5500ceb27304706a4e21106a195e38a6d57c4ee046146168e8d435a2ceadf143"
	I1028 17:58:19.826454   49274 cri.go:89] found id: "fd134912be1f33eb5df2f51c1c091b2782b720093e8145a73fc3ced1ed3d61b0"
	I1028 17:58:19.826461   49274 cri.go:89] found id: "6be4f0150414ecb719308e654dfe475cc60d922a30553913db6f21c791604523"
	I1028 17:58:19.826466   49274 cri.go:89] found id: "4f1fcae7239a1074023a23c8ca05de17f39ebad262a1d6e58d4752e0649431a2"
	I1028 17:58:19.826473   49274 cri.go:89] found id: "a878b44f1390e731efe4ea8becae131923aee9984a263360abcde7ab1efbaf4c"
	I1028 17:58:19.826477   49274 cri.go:89] found id: "c8c4b6d9475bbd1e1e80a611f61fe02c69d83a9a3f482001baf1517cf848d1c5"
	I1028 17:58:19.826480   49274 cri.go:89] found id: ""
	I1028 17:58:19.826513   49274 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-949956 -n multinode-949956
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-949956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (329.81s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (145.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 stop
E1028 18:00:33.437976   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:01:41.463277   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-949956 stop: exit status 82 (2m0.454261066s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-949956-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-949956 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 status: (18.800202735s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr: (3.359693211s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-949956 -n multinode-949956
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 logs -n 25: (1.997559572s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956:/home/docker/cp-test_multinode-949956-m02_multinode-949956.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956 sudo cat                                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m02_multinode-949956.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03:/home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956-m03 sudo cat                                   | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp testdata/cp-test.txt                                                | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile997746669/001/cp-test_multinode-949956-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956:/home/docker/cp-test_multinode-949956-m03_multinode-949956.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956 sudo cat                                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m03_multinode-949956.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt                       | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02:/home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956-m02 sudo cat                                   | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-949956 node stop m03                                                          | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	| node    | multinode-949956 node start                                                             | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-949956                                                                | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:54 UTC |                     |
	| stop    | -p multinode-949956                                                                     | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:54 UTC |                     |
	| start   | -p multinode-949956                                                                     | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 17:56 UTC | 28 Oct 24 18:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-949956                                                                | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 18:00 UTC |                     |
	| node    | multinode-949956 node delete                                                            | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 18:00 UTC | 28 Oct 24 18:00 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-949956 stop                                                                   | multinode-949956 | jenkins | v1.34.0 | 28 Oct 24 18:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:56:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:56:42.795491   49274 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:56:42.795736   49274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:56:42.795745   49274 out.go:358] Setting ErrFile to fd 2...
	I1028 17:56:42.795749   49274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:56:42.795900   49274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:56:42.796406   49274 out.go:352] Setting JSON to false
	I1028 17:56:42.797302   49274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5946,"bootTime":1730132257,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:56:42.797394   49274 start.go:139] virtualization: kvm guest
	I1028 17:56:42.799561   49274 out.go:177] * [multinode-949956] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:56:42.800974   49274 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:56:42.800978   49274 notify.go:220] Checking for updates...
	I1028 17:56:42.803308   49274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:56:42.804570   49274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:56:42.805726   49274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:56:42.806768   49274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:56:42.807913   49274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:56:42.809685   49274 config.go:182] Loaded profile config "multinode-949956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:56:42.809798   49274 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:56:42.810447   49274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:56:42.810505   49274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:56:42.825900   49274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I1028 17:56:42.826355   49274 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:56:42.826938   49274 main.go:141] libmachine: Using API Version  1
	I1028 17:56:42.826957   49274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:56:42.827302   49274 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:56:42.827507   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:56:42.861296   49274 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 17:56:42.862509   49274 start.go:297] selected driver: kvm2
	I1028 17:56:42.862523   49274 start.go:901] validating driver "kvm2" against &{Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:56:42.862694   49274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:56:42.863012   49274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:56:42.863081   49274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:56:42.876698   49274 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:56:42.877437   49274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 17:56:42.877465   49274 cni.go:84] Creating CNI manager for ""
	I1028 17:56:42.877525   49274 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 17:56:42.877579   49274 start.go:340] cluster config:
	{Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-949956 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:56:42.877715   49274 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:56:42.879466   49274 out.go:177] * Starting "multinode-949956" primary control-plane node in "multinode-949956" cluster
	I1028 17:56:42.880789   49274 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:56:42.880820   49274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:56:42.880830   49274 cache.go:56] Caching tarball of preloaded images
	I1028 17:56:42.880909   49274 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 17:56:42.880922   49274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 17:56:42.881029   49274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/config.json ...
	I1028 17:56:42.881214   49274 start.go:360] acquireMachinesLock for multinode-949956: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 17:56:42.881268   49274 start.go:364] duration metric: took 37.034µs to acquireMachinesLock for "multinode-949956"
	I1028 17:56:42.881281   49274 start.go:96] Skipping create...Using existing machine configuration
	I1028 17:56:42.881288   49274 fix.go:54] fixHost starting: 
	I1028 17:56:42.881572   49274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:56:42.881634   49274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:56:42.895108   49274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1028 17:56:42.895455   49274 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:56:42.895935   49274 main.go:141] libmachine: Using API Version  1
	I1028 17:56:42.895954   49274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:56:42.896270   49274 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:56:42.896422   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:56:42.896573   49274 main.go:141] libmachine: (multinode-949956) Calling .GetState
	I1028 17:56:42.898001   49274 fix.go:112] recreateIfNeeded on multinode-949956: state=Running err=<nil>
	W1028 17:56:42.898044   49274 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 17:56:42.899815   49274 out.go:177] * Updating the running kvm2 "multinode-949956" VM ...
	I1028 17:56:42.901034   49274 machine.go:93] provisionDockerMachine start ...
	I1028 17:56:42.901053   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:56:42.901258   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:42.903650   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:42.904098   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:42.904125   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:42.904201   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:42.904360   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:42.904508   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:42.904627   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:42.904761   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:42.904941   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:42.904952   49274 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 17:56:43.005321   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-949956
	
	I1028 17:56:43.005351   49274 main.go:141] libmachine: (multinode-949956) Calling .GetMachineName
	I1028 17:56:43.005570   49274 buildroot.go:166] provisioning hostname "multinode-949956"
	I1028 17:56:43.005592   49274 main.go:141] libmachine: (multinode-949956) Calling .GetMachineName
	I1028 17:56:43.005797   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.008187   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.008628   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.008653   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.008734   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.008885   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.009000   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.009104   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.009248   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:43.009443   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:43.009455   49274 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-949956 && echo "multinode-949956" | sudo tee /etc/hostname
	I1028 17:56:43.125205   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-949956
	
	I1028 17:56:43.125229   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.128048   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.128440   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.128502   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.128690   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.128872   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.128999   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.129143   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.129310   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:43.129470   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:43.129485   49274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-949956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-949956/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-949956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 17:56:43.225118   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 17:56:43.225148   49274 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 17:56:43.225165   49274 buildroot.go:174] setting up certificates
	I1028 17:56:43.225174   49274 provision.go:84] configureAuth start
	I1028 17:56:43.225182   49274 main.go:141] libmachine: (multinode-949956) Calling .GetMachineName
	I1028 17:56:43.225411   49274 main.go:141] libmachine: (multinode-949956) Calling .GetIP
	I1028 17:56:43.227730   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.228085   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.228114   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.228234   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.230320   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.230662   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.230692   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.230781   49274 provision.go:143] copyHostCerts
	I1028 17:56:43.230810   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:56:43.230860   49274 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 17:56:43.230876   49274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 17:56:43.230959   49274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 17:56:43.231042   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:56:43.231066   49274 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 17:56:43.231071   49274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 17:56:43.231104   49274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 17:56:43.231159   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:56:43.231185   49274 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 17:56:43.231194   49274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 17:56:43.231232   49274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 17:56:43.231305   49274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.multinode-949956 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-949956]
	I1028 17:56:43.588931   49274 provision.go:177] copyRemoteCerts
	I1028 17:56:43.589010   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 17:56:43.589037   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.591865   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.592239   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.592277   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.592511   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.592705   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.592848   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.592979   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:56:43.671075   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 17:56:43.671134   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 17:56:43.695989   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 17:56:43.696058   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 17:56:43.719491   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 17:56:43.719553   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 17:56:43.743666   49274 provision.go:87] duration metric: took 518.481902ms to configureAuth
	I1028 17:56:43.743691   49274 buildroot.go:189] setting minikube options for container-runtime
	I1028 17:56:43.743886   49274 config.go:182] Loaded profile config "multinode-949956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:56:43.743954   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:56:43.746536   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.746843   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:56:43.746871   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:56:43.746995   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:56:43.747164   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.747321   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:56:43.747486   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:56:43.747665   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:56:43.747820   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:56:43.747833   49274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 17:58:14.326687   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 17:58:14.326749   49274 machine.go:96] duration metric: took 1m31.425701049s to provisionDockerMachine
	I1028 17:58:14.326772   49274 start.go:293] postStartSetup for "multinode-949956" (driver="kvm2")
	I1028 17:58:14.326795   49274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 17:58:14.326823   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.327191   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 17:58:14.327236   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.330177   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.330690   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.330714   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.330859   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.331027   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.331165   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.331310   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:58:14.411915   49274 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 17:58:14.416035   49274 command_runner.go:130] > NAME=Buildroot
	I1028 17:58:14.416056   49274 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 17:58:14.416063   49274 command_runner.go:130] > ID=buildroot
	I1028 17:58:14.416071   49274 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 17:58:14.416079   49274 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 17:58:14.416114   49274 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 17:58:14.416134   49274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 17:58:14.416221   49274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 17:58:14.416313   49274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 17:58:14.416334   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /etc/ssl/certs/206802.pem
	I1028 17:58:14.416439   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 17:58:14.425832   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
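A quick way to confirm the synced certificate actually landed on the node (an illustrative spot-check, not something the test run performs) is:

    # expect a 1708-byte file, matching the scp size reported in the log line above
    stat -c '%s %n' /etc/ssl/certs/206802.pem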
	I1028 17:58:14.448747   49274 start.go:296] duration metric: took 121.964321ms for postStartSetup
	I1028 17:58:14.448817   49274 fix.go:56] duration metric: took 1m31.567527114s for fixHost
	I1028 17:58:14.448850   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.451332   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.451767   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.451795   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.451941   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.452094   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.452238   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.452341   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.452501   49274 main.go:141] libmachine: Using SSH client type: native
	I1028 17:58:14.452653   49274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1028 17:58:14.452663   49274 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 17:58:14.549104   49274 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730138294.534079014
	
	I1028 17:58:14.549129   49274 fix.go:216] guest clock: 1730138294.534079014
	I1028 17:58:14.549138   49274 fix.go:229] Guest: 2024-10-28 17:58:14.534079014 +0000 UTC Remote: 2024-10-28 17:58:14.448828065 +0000 UTC m=+91.689815453 (delta=85.250949ms)
	I1028 17:58:14.549186   49274 fix.go:200] guest clock delta is within tolerance: 85.250949ms
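The delta reported above is simply the guest's "date +%s.%N" reading minus the host-side timestamp captured a moment earlier; reproducing it by hand (values copied from the log lines above, bc assumed to be available):

    # guest clock minus host clock, both values taken from the log above
    echo "1730138294.534079014 - 1730138294.448828065" | bc
    # => .085250949  (the 85.250949ms delta reported as within tolerance)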
	I1028 17:58:14.549196   49274 start.go:83] releasing machines lock for "multinode-949956", held for 1m31.667918735s
	I1028 17:58:14.549229   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.549482   49274 main.go:141] libmachine: (multinode-949956) Calling .GetIP
	I1028 17:58:14.551904   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.552196   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.552224   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.552371   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.552816   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.552977   49274 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:58:14.553068   49274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 17:58:14.553111   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.553184   49274 ssh_runner.go:195] Run: cat /version.json
	I1028 17:58:14.553204   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:58:14.555495   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.555774   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.555802   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.555824   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.555983   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.556133   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.556220   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:14.556244   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:14.556278   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.556372   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:58:14.556428   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:58:14.556596   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:58:14.556711   49274 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:58:14.556856   49274 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:58:14.628748   49274 command_runner.go:130] > {"iso_version": "v1.34.0-1730109979-19872", "kicbase_version": "v0.0.45-1729876044-19868", "minikube_version": "v1.34.0", "commit": "3cd67be5b3d326faa45da4684b85954cdc5db093"}
	I1028 17:58:14.629039   49274 ssh_runner.go:195] Run: systemctl --version
	I1028 17:58:14.653472   49274 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1028 17:58:14.654096   49274 command_runner.go:130] > systemd 252 (252)
	I1028 17:58:14.654140   49274 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 17:58:14.654204   49274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 17:58:14.812857   49274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 17:58:14.819655   49274 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 17:58:14.819685   49274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 17:58:14.819728   49274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 17:58:14.828600   49274 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 17:58:14.828617   49274 start.go:495] detecting cgroup driver to use...
	I1028 17:58:14.828692   49274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 17:58:14.844051   49274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 17:58:14.858251   49274 docker.go:217] disabling cri-docker service (if available) ...
	I1028 17:58:14.858302   49274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 17:58:14.871660   49274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 17:58:14.884720   49274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 17:58:15.031965   49274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 17:58:15.188614   49274 docker.go:233] disabling docker service ...
	I1028 17:58:15.188690   49274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 17:58:15.206098   49274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 17:58:15.219926   49274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 17:58:15.373248   49274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 17:58:15.513548   49274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 17:58:15.526847   49274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 17:58:15.545832   49274 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1028 17:58:15.546310   49274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 17:58:15.546378   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.556616   49274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 17:58:15.556667   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.566432   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.576318   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.586186   49274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 17:58:15.596312   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.607150   49274 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 17:58:15.618446   49274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
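The sed commands above only touch a handful of keys in /etc/crio/crio.conf.d/02-crio.conf; a sketch of how to confirm the result on the node (the file's full section layout is not shown in this log, so only the edited keys are listed):

    # verify the keys rewritten by the sed sequence above (illustrative)
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)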
	I1028 17:58:15.628434   49274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 17:58:15.637274   49274 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 17:58:15.637316   49274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 17:58:15.646164   49274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:58:15.784292   49274 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 17:58:18.911346   49274 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.127019951s)
	I1028 17:58:18.911370   49274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 17:58:18.911408   49274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 17:58:18.916448   49274 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1028 17:58:18.916474   49274 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 17:58:18.916484   49274 command_runner.go:130] > Device: 0,22	Inode: 1259        Links: 1
	I1028 17:58:18.916494   49274 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 17:58:18.916502   49274 command_runner.go:130] > Access: 2024-10-28 17:58:18.795263900 +0000
	I1028 17:58:18.916514   49274 command_runner.go:130] > Modify: 2024-10-28 17:58:18.795263900 +0000
	I1028 17:58:18.916521   49274 command_runner.go:130] > Change: 2024-10-28 17:58:18.795263900 +0000
	I1028 17:58:18.916531   49274 command_runner.go:130] >  Birth: -
	I1028 17:58:18.916668   49274 start.go:563] Will wait 60s for crictl version
	I1028 17:58:18.916709   49274 ssh_runner.go:195] Run: which crictl
	I1028 17:58:18.920316   49274 command_runner.go:130] > /usr/bin/crictl
	I1028 17:58:18.920515   49274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 17:58:18.957350   49274 command_runner.go:130] > Version:  0.1.0
	I1028 17:58:18.957372   49274 command_runner.go:130] > RuntimeName:  cri-o
	I1028 17:58:18.957377   49274 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1028 17:58:18.957382   49274 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 17:58:18.958238   49274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
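To repeat this runtime probe by hand, the equivalent commands are (a sketch; crictl normally picks up the endpoint from the /etc/crictl.yaml written earlier, but it can also be passed explicitly):

    # same check the test performs, run manually on the node
    sudo crictl version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version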
	I1028 17:58:18.958320   49274 ssh_runner.go:195] Run: crio --version
	I1028 17:58:18.984782   49274 command_runner.go:130] > crio version 1.29.1
	I1028 17:58:18.984799   49274 command_runner.go:130] > Version:        1.29.1
	I1028 17:58:18.984818   49274 command_runner.go:130] > GitCommit:      unknown
	I1028 17:58:18.984823   49274 command_runner.go:130] > GitCommitDate:  unknown
	I1028 17:58:18.984830   49274 command_runner.go:130] > GitTreeState:   clean
	I1028 17:58:18.984837   49274 command_runner.go:130] > BuildDate:      2024-10-28T15:50:52Z
	I1028 17:58:18.984844   49274 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 17:58:18.984851   49274 command_runner.go:130] > Compiler:       gc
	I1028 17:58:18.984861   49274 command_runner.go:130] > Platform:       linux/amd64
	I1028 17:58:18.984875   49274 command_runner.go:130] > Linkmode:       dynamic
	I1028 17:58:18.984885   49274 command_runner.go:130] > BuildTags:      
	I1028 17:58:18.984889   49274 command_runner.go:130] >   containers_image_ostree_stub
	I1028 17:58:18.984896   49274 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 17:58:18.984900   49274 command_runner.go:130] >   btrfs_noversion
	I1028 17:58:18.984905   49274 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 17:58:18.984910   49274 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 17:58:18.984913   49274 command_runner.go:130] >   seccomp
	I1028 17:58:18.984917   49274 command_runner.go:130] > LDFlags:          unknown
	I1028 17:58:18.984922   49274 command_runner.go:130] > SeccompEnabled:   true
	I1028 17:58:18.984926   49274 command_runner.go:130] > AppArmorEnabled:  false
	I1028 17:58:18.985951   49274 ssh_runner.go:195] Run: crio --version
	I1028 17:58:19.012146   49274 command_runner.go:130] > crio version 1.29.1
	I1028 17:58:19.012164   49274 command_runner.go:130] > Version:        1.29.1
	I1028 17:58:19.012171   49274 command_runner.go:130] > GitCommit:      unknown
	I1028 17:58:19.012179   49274 command_runner.go:130] > GitCommitDate:  unknown
	I1028 17:58:19.012184   49274 command_runner.go:130] > GitTreeState:   clean
	I1028 17:58:19.012193   49274 command_runner.go:130] > BuildDate:      2024-10-28T15:50:52Z
	I1028 17:58:19.012203   49274 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 17:58:19.012209   49274 command_runner.go:130] > Compiler:       gc
	I1028 17:58:19.012218   49274 command_runner.go:130] > Platform:       linux/amd64
	I1028 17:58:19.012225   49274 command_runner.go:130] > Linkmode:       dynamic
	I1028 17:58:19.012246   49274 command_runner.go:130] > BuildTags:      
	I1028 17:58:19.012256   49274 command_runner.go:130] >   containers_image_ostree_stub
	I1028 17:58:19.012263   49274 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 17:58:19.012269   49274 command_runner.go:130] >   btrfs_noversion
	I1028 17:58:19.012279   49274 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 17:58:19.012287   49274 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 17:58:19.012293   49274 command_runner.go:130] >   seccomp
	I1028 17:58:19.012300   49274 command_runner.go:130] > LDFlags:          unknown
	I1028 17:58:19.012308   49274 command_runner.go:130] > SeccompEnabled:   true
	I1028 17:58:19.012314   49274 command_runner.go:130] > AppArmorEnabled:  false
	I1028 17:58:19.015354   49274 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 17:58:19.016678   49274 main.go:141] libmachine: (multinode-949956) Calling .GetIP
	I1028 17:58:19.019118   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:19.019468   49274 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:58:19.019486   49274 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:58:19.019730   49274 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 17:58:19.023902   49274 command_runner.go:130] > 192.168.39.1	host.minikube.internal
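The grep confirms the host.minikube.internal entry is already present in /etc/hosts; had it been missing, adding it by hand would look roughly like this (illustrative only, the test did not need this step):

    # only needed when the grep above finds nothing
    echo "192.168.39.1 host.minikube.internal" | sudo tee -a /etc/hosts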
	I1028 17:58:19.024035   49274 kubeadm.go:883] updating cluster {Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 17:58:19.024183   49274 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:58:19.024222   49274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:58:19.068321   49274 command_runner.go:130] > {
	I1028 17:58:19.068346   49274 command_runner.go:130] >   "images": [
	I1028 17:58:19.068350   49274 command_runner.go:130] >     {
	I1028 17:58:19.068359   49274 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 17:58:19.068363   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068373   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 17:58:19.068379   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068386   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068397   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 17:58:19.068412   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 17:58:19.068419   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068424   49274 command_runner.go:130] >       "size": "94965812",
	I1028 17:58:19.068430   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068434   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068443   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068449   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068453   49274 command_runner.go:130] >     },
	I1028 17:58:19.068459   49274 command_runner.go:130] >     {
	I1028 17:58:19.068465   49274 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 17:58:19.068485   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068493   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 17:58:19.068502   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068509   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068520   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 17:58:19.068529   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 17:58:19.068533   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068537   49274 command_runner.go:130] >       "size": "1363676",
	I1028 17:58:19.068541   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068550   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068559   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068565   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068575   49274 command_runner.go:130] >     },
	I1028 17:58:19.068581   49274 command_runner.go:130] >     {
	I1028 17:58:19.068591   49274 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 17:58:19.068603   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068613   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 17:58:19.068621   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068629   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068643   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 17:58:19.068660   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 17:58:19.068669   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068676   49274 command_runner.go:130] >       "size": "31470524",
	I1028 17:58:19.068685   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068693   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068700   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068705   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068709   49274 command_runner.go:130] >     },
	I1028 17:58:19.068713   49274 command_runner.go:130] >     {
	I1028 17:58:19.068725   49274 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 17:58:19.068734   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068743   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 17:58:19.068752   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068761   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068775   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 17:58:19.068793   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 17:58:19.068799   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068804   49274 command_runner.go:130] >       "size": "63273227",
	I1028 17:58:19.068814   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.068825   49274 command_runner.go:130] >       "username": "nonroot",
	I1028 17:58:19.068834   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.068843   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.068850   49274 command_runner.go:130] >     },
	I1028 17:58:19.068859   49274 command_runner.go:130] >     {
	I1028 17:58:19.068871   49274 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 17:58:19.068879   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.068885   49274 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 17:58:19.068893   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068902   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.068916   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 17:58:19.068930   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 17:58:19.068938   49274 command_runner.go:130] >       ],
	I1028 17:58:19.068945   49274 command_runner.go:130] >       "size": "149009664",
	I1028 17:58:19.068953   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.068962   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.068968   49274 command_runner.go:130] >       },
	I1028 17:58:19.068974   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.068983   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069011   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069021   49274 command_runner.go:130] >     },
	I1028 17:58:19.069026   49274 command_runner.go:130] >     {
	I1028 17:58:19.069036   49274 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 17:58:19.069045   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069054   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 17:58:19.069062   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069072   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069086   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 17:58:19.069100   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 17:58:19.069109   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069118   49274 command_runner.go:130] >       "size": "95274464",
	I1028 17:58:19.069124   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069133   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.069140   49274 command_runner.go:130] >       },
	I1028 17:58:19.069144   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069152   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069162   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069171   49274 command_runner.go:130] >     },
	I1028 17:58:19.069179   49274 command_runner.go:130] >     {
	I1028 17:58:19.069188   49274 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 17:58:19.069197   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069209   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 17:58:19.069218   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069225   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069235   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 17:58:19.069249   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 17:58:19.069259   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069266   49274 command_runner.go:130] >       "size": "89474374",
	I1028 17:58:19.069275   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069283   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.069292   49274 command_runner.go:130] >       },
	I1028 17:58:19.069301   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069307   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069313   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069317   49274 command_runner.go:130] >     },
	I1028 17:58:19.069325   49274 command_runner.go:130] >     {
	I1028 17:58:19.069339   49274 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 17:58:19.069348   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069359   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 17:58:19.069367   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069376   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069397   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 17:58:19.069411   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 17:58:19.069420   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069430   49274 command_runner.go:130] >       "size": "92783513",
	I1028 17:58:19.069440   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.069446   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069453   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069459   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069465   49274 command_runner.go:130] >     },
	I1028 17:58:19.069470   49274 command_runner.go:130] >     {
	I1028 17:58:19.069478   49274 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 17:58:19.069485   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069494   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 17:58:19.069502   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069514   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069527   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 17:58:19.069542   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 17:58:19.069550   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069559   49274 command_runner.go:130] >       "size": "68457798",
	I1028 17:58:19.069566   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069570   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.069574   49274 command_runner.go:130] >       },
	I1028 17:58:19.069583   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069593   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069603   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.069611   49274 command_runner.go:130] >     },
	I1028 17:58:19.069619   49274 command_runner.go:130] >     {
	I1028 17:58:19.069631   49274 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 17:58:19.069641   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.069650   49274 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 17:58:19.069654   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069662   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.069678   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 17:58:19.069692   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 17:58:19.069700   49274 command_runner.go:130] >       ],
	I1028 17:58:19.069709   49274 command_runner.go:130] >       "size": "742080",
	I1028 17:58:19.069718   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.069727   49274 command_runner.go:130] >         "value": "65535"
	I1028 17:58:19.069734   49274 command_runner.go:130] >       },
	I1028 17:58:19.069738   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.069742   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.069751   49274 command_runner.go:130] >       "pinned": true
	I1028 17:58:19.069759   49274 command_runner.go:130] >     }
	I1028 17:58:19.069767   49274 command_runner.go:130] >   ]
	I1028 17:58:19.069773   49274 command_runner.go:130] > }
	I1028 17:58:19.069985   49274 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:58:19.070006   49274 crio.go:433] Images already preloaded, skipping extraction
	I1028 17:58:19.070071   49274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 17:58:19.102247   49274 command_runner.go:130] > {
	I1028 17:58:19.102269   49274 command_runner.go:130] >   "images": [
	I1028 17:58:19.102276   49274 command_runner.go:130] >     {
	I1028 17:58:19.102288   49274 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 17:58:19.102295   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102310   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 17:58:19.102316   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102320   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102329   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 17:58:19.102336   49274 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 17:58:19.102343   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102348   49274 command_runner.go:130] >       "size": "94965812",
	I1028 17:58:19.102352   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102363   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102373   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102387   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102393   49274 command_runner.go:130] >     },
	I1028 17:58:19.102399   49274 command_runner.go:130] >     {
	I1028 17:58:19.102410   49274 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 17:58:19.102415   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102421   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 17:58:19.102425   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102441   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102453   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 17:58:19.102468   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 17:58:19.102477   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102487   49274 command_runner.go:130] >       "size": "1363676",
	I1028 17:58:19.102496   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102508   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102515   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102520   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102526   49274 command_runner.go:130] >     },
	I1028 17:58:19.102530   49274 command_runner.go:130] >     {
	I1028 17:58:19.102540   49274 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 17:58:19.102550   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102562   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 17:58:19.102570   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102579   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102594   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 17:58:19.102608   49274 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 17:58:19.102615   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102619   49274 command_runner.go:130] >       "size": "31470524",
	I1028 17:58:19.102627   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102636   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102646   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102656   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102664   49274 command_runner.go:130] >     },
	I1028 17:58:19.102673   49274 command_runner.go:130] >     {
	I1028 17:58:19.102686   49274 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 17:58:19.102694   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102700   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 17:58:19.102706   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102713   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102728   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 17:58:19.102746   49274 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 17:58:19.102757   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102766   49274 command_runner.go:130] >       "size": "63273227",
	I1028 17:58:19.102776   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.102784   49274 command_runner.go:130] >       "username": "nonroot",
	I1028 17:58:19.102791   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102797   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102807   49274 command_runner.go:130] >     },
	I1028 17:58:19.102815   49274 command_runner.go:130] >     {
	I1028 17:58:19.102828   49274 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 17:58:19.102838   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.102848   49274 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 17:58:19.102857   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102864   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.102874   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 17:58:19.102887   49274 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 17:58:19.102896   49274 command_runner.go:130] >       ],
	I1028 17:58:19.102907   49274 command_runner.go:130] >       "size": "149009664",
	I1028 17:58:19.102916   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.102925   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.102931   49274 command_runner.go:130] >       },
	I1028 17:58:19.102940   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.102947   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.102954   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.102959   49274 command_runner.go:130] >     },
	I1028 17:58:19.102964   49274 command_runner.go:130] >     {
	I1028 17:58:19.103007   49274 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 17:58:19.103024   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103033   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 17:58:19.103040   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103046   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103061   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 17:58:19.103075   49274 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 17:58:19.103081   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103090   49274 command_runner.go:130] >       "size": "95274464",
	I1028 17:58:19.103099   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103106   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.103114   49274 command_runner.go:130] >       },
	I1028 17:58:19.103121   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103130   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103135   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103144   49274 command_runner.go:130] >     },
	I1028 17:58:19.103149   49274 command_runner.go:130] >     {
	I1028 17:58:19.103163   49274 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 17:58:19.103172   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103181   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 17:58:19.103190   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103199   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103213   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 17:58:19.103223   49274 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 17:58:19.103232   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103242   49274 command_runner.go:130] >       "size": "89474374",
	I1028 17:58:19.103251   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103260   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.103269   49274 command_runner.go:130] >       },
	I1028 17:58:19.103278   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103287   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103296   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103303   49274 command_runner.go:130] >     },
	I1028 17:58:19.103306   49274 command_runner.go:130] >     {
	I1028 17:58:19.103318   49274 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 17:58:19.103328   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103339   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 17:58:19.103348   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103356   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103377   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 17:58:19.103389   49274 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 17:58:19.103396   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103405   49274 command_runner.go:130] >       "size": "92783513",
	I1028 17:58:19.103414   49274 command_runner.go:130] >       "uid": null,
	I1028 17:58:19.103424   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103433   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103439   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103447   49274 command_runner.go:130] >     },
	I1028 17:58:19.103453   49274 command_runner.go:130] >     {
	I1028 17:58:19.103465   49274 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 17:58:19.103473   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103478   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 17:58:19.103485   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103495   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103510   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 17:58:19.103524   49274 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 17:58:19.103532   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103541   49274 command_runner.go:130] >       "size": "68457798",
	I1028 17:58:19.103550   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103558   49274 command_runner.go:130] >         "value": "0"
	I1028 17:58:19.103562   49274 command_runner.go:130] >       },
	I1028 17:58:19.103568   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103577   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103587   49274 command_runner.go:130] >       "pinned": false
	I1028 17:58:19.103595   49274 command_runner.go:130] >     },
	I1028 17:58:19.103604   49274 command_runner.go:130] >     {
	I1028 17:58:19.103614   49274 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 17:58:19.103623   49274 command_runner.go:130] >       "repoTags": [
	I1028 17:58:19.103633   49274 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 17:58:19.103641   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103646   49274 command_runner.go:130] >       "repoDigests": [
	I1028 17:58:19.103658   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 17:58:19.103673   49274 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 17:58:19.103683   49274 command_runner.go:130] >       ],
	I1028 17:58:19.103695   49274 command_runner.go:130] >       "size": "742080",
	I1028 17:58:19.103704   49274 command_runner.go:130] >       "uid": {
	I1028 17:58:19.103714   49274 command_runner.go:130] >         "value": "65535"
	I1028 17:58:19.103722   49274 command_runner.go:130] >       },
	I1028 17:58:19.103728   49274 command_runner.go:130] >       "username": "",
	I1028 17:58:19.103732   49274 command_runner.go:130] >       "spec": null,
	I1028 17:58:19.103740   49274 command_runner.go:130] >       "pinned": true
	I1028 17:58:19.103748   49274 command_runner.go:130] >     }
	I1028 17:58:19.103757   49274 command_runner.go:130] >   ]
	I1028 17:58:19.103765   49274 command_runner.go:130] > }
	I1028 17:58:19.103915   49274 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 17:58:19.103929   49274 cache_images.go:84] Images are preloaded, skipping loading
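The two JSON dumps above are the raw output of "sudo crictl images --output json"; a more compact way to eyeball the same preloaded-image set on the node (a sketch; jq is assumed to be available, which the minikube ISO may not guarantee):

    # list only the tags from the image store (needs jq)
    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # or the built-in tabular view, with no extra tooling
    sudo crictl images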
	I1028 17:58:19.103943   49274 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.2 crio true true} ...
	I1028 17:58:19.104089   49274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-949956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
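The kubelet unit override printed above is the drop-in minikube templates for this node; one way to inspect what actually lands on the machine (a sketch; the drop-in path is managed by minikube and not shown in this log, so systemctl is used to resolve it):

    # show the effective kubelet unit plus drop-ins, then the flags of interest
    sudo systemctl cat kubelet
    sudo systemctl cat kubelet | grep -E 'node-ip|hostname-override'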
	I1028 17:58:19.104181   49274 ssh_runner.go:195] Run: crio config
	I1028 17:58:19.139504   49274 command_runner.go:130] ! time="2024-10-28 17:58:19.124706900Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1028 17:58:19.146145   49274 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1028 17:58:19.156713   49274 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1028 17:58:19.156739   49274 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1028 17:58:19.156750   49274 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1028 17:58:19.156755   49274 command_runner.go:130] > #
	I1028 17:58:19.156767   49274 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1028 17:58:19.156781   49274 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1028 17:58:19.156790   49274 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1028 17:58:19.156835   49274 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1028 17:58:19.156847   49274 command_runner.go:130] > # reload'.
	I1028 17:58:19.156856   49274 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1028 17:58:19.156866   49274 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1028 17:58:19.156876   49274 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1028 17:58:19.156885   49274 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1028 17:58:19.156891   49274 command_runner.go:130] > [crio]
	I1028 17:58:19.156901   49274 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1028 17:58:19.156912   49274 command_runner.go:130] > # containers images, in this directory.
	I1028 17:58:19.156919   49274 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1028 17:58:19.156932   49274 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1028 17:58:19.156944   49274 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1028 17:58:19.156957   49274 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1028 17:58:19.156964   49274 command_runner.go:130] > # imagestore = ""
	I1028 17:58:19.156974   49274 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1028 17:58:19.156983   49274 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1028 17:58:19.156991   49274 command_runner.go:130] > storage_driver = "overlay"
	I1028 17:58:19.157001   49274 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1028 17:58:19.157014   49274 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1028 17:58:19.157021   49274 command_runner.go:130] > storage_option = [
	I1028 17:58:19.157032   49274 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1028 17:58:19.157040   49274 command_runner.go:130] > ]
	I1028 17:58:19.157049   49274 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1028 17:58:19.157058   49274 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1028 17:58:19.157063   49274 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1028 17:58:19.157071   49274 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1028 17:58:19.157077   49274 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1028 17:58:19.157081   49274 command_runner.go:130] > # always happen on a node reboot
	I1028 17:58:19.157087   49274 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1028 17:58:19.157098   49274 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1028 17:58:19.157106   49274 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1028 17:58:19.157114   49274 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1028 17:58:19.157121   49274 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1028 17:58:19.157131   49274 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1028 17:58:19.157141   49274 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1028 17:58:19.157147   49274 command_runner.go:130] > # internal_wipe = true
	I1028 17:58:19.157155   49274 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1028 17:58:19.157163   49274 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1028 17:58:19.157167   49274 command_runner.go:130] > # internal_repair = false
	I1028 17:58:19.157175   49274 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1028 17:58:19.157181   49274 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1028 17:58:19.157186   49274 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1028 17:58:19.157192   49274 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1028 17:58:19.157199   49274 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1028 17:58:19.157207   49274 command_runner.go:130] > [crio.api]
	I1028 17:58:19.157212   49274 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1028 17:58:19.157219   49274 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1028 17:58:19.157225   49274 command_runner.go:130] > # IP address on which the stream server will listen.
	I1028 17:58:19.157232   49274 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1028 17:58:19.157239   49274 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1028 17:58:19.157246   49274 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1028 17:58:19.157250   49274 command_runner.go:130] > # stream_port = "0"
	I1028 17:58:19.157260   49274 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1028 17:58:19.157270   49274 command_runner.go:130] > # stream_enable_tls = false
	I1028 17:58:19.157279   49274 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1028 17:58:19.157289   49274 command_runner.go:130] > # stream_idle_timeout = ""
	I1028 17:58:19.157298   49274 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1028 17:58:19.157311   49274 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1028 17:58:19.157319   49274 command_runner.go:130] > # minutes.
	I1028 17:58:19.157326   49274 command_runner.go:130] > # stream_tls_cert = ""
	I1028 17:58:19.157343   49274 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1028 17:58:19.157356   49274 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1028 17:58:19.157364   49274 command_runner.go:130] > # stream_tls_key = ""
	I1028 17:58:19.157370   49274 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1028 17:58:19.157377   49274 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1028 17:58:19.157392   49274 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1028 17:58:19.157398   49274 command_runner.go:130] > # stream_tls_ca = ""
	I1028 17:58:19.157406   49274 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 17:58:19.157412   49274 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1028 17:58:19.157420   49274 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 17:58:19.157427   49274 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1028 17:58:19.157433   49274 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1028 17:58:19.157441   49274 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1028 17:58:19.157445   49274 command_runner.go:130] > [crio.runtime]
	I1028 17:58:19.157451   49274 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1028 17:58:19.157457   49274 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1028 17:58:19.157463   49274 command_runner.go:130] > # "nofile=1024:2048"
	I1028 17:58:19.157469   49274 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1028 17:58:19.157476   49274 command_runner.go:130] > # default_ulimits = [
	I1028 17:58:19.157479   49274 command_runner.go:130] > # ]
	I1028 17:58:19.157485   49274 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1028 17:58:19.157489   49274 command_runner.go:130] > # no_pivot = false
	I1028 17:58:19.157494   49274 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1028 17:58:19.157502   49274 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1028 17:58:19.157507   49274 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1028 17:58:19.157515   49274 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1028 17:58:19.157520   49274 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1028 17:58:19.157527   49274 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 17:58:19.157533   49274 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1028 17:58:19.157537   49274 command_runner.go:130] > # Cgroup setting for conmon
	I1028 17:58:19.157544   49274 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1028 17:58:19.157550   49274 command_runner.go:130] > conmon_cgroup = "pod"
	I1028 17:58:19.157556   49274 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1028 17:58:19.157563   49274 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1028 17:58:19.157570   49274 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 17:58:19.157576   49274 command_runner.go:130] > conmon_env = [
	I1028 17:58:19.157582   49274 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 17:58:19.157585   49274 command_runner.go:130] > ]
	I1028 17:58:19.157590   49274 command_runner.go:130] > # Additional environment variables to set for all the
	I1028 17:58:19.157597   49274 command_runner.go:130] > # containers. These are overridden if set in the
	I1028 17:58:19.157602   49274 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1028 17:58:19.157608   49274 command_runner.go:130] > # default_env = [
	I1028 17:58:19.157611   49274 command_runner.go:130] > # ]
	I1028 17:58:19.157617   49274 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1028 17:58:19.157627   49274 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1028 17:58:19.157631   49274 command_runner.go:130] > # selinux = false
	I1028 17:58:19.157637   49274 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1028 17:58:19.157645   49274 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1028 17:58:19.157651   49274 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1028 17:58:19.157657   49274 command_runner.go:130] > # seccomp_profile = ""
	I1028 17:58:19.157663   49274 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1028 17:58:19.157671   49274 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1028 17:58:19.157677   49274 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1028 17:58:19.157683   49274 command_runner.go:130] > # which might increase security.
	I1028 17:58:19.157687   49274 command_runner.go:130] > # This option is currently deprecated,
	I1028 17:58:19.157695   49274 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1028 17:58:19.157699   49274 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1028 17:58:19.157709   49274 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1028 17:58:19.157715   49274 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1028 17:58:19.157723   49274 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1028 17:58:19.157729   49274 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1028 17:58:19.157736   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.157741   49274 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1028 17:58:19.157748   49274 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1028 17:58:19.157753   49274 command_runner.go:130] > # the cgroup blockio controller.
	I1028 17:58:19.157759   49274 command_runner.go:130] > # blockio_config_file = ""
	I1028 17:58:19.157766   49274 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1028 17:58:19.157772   49274 command_runner.go:130] > # blockio parameters.
	I1028 17:58:19.157775   49274 command_runner.go:130] > # blockio_reload = false
	I1028 17:58:19.157782   49274 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1028 17:58:19.157788   49274 command_runner.go:130] > # irqbalance daemon.
	I1028 17:58:19.157793   49274 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1028 17:58:19.157800   49274 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1028 17:58:19.157808   49274 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1028 17:58:19.157816   49274 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1028 17:58:19.157821   49274 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1028 17:58:19.157829   49274 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1028 17:58:19.157835   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.157841   49274 command_runner.go:130] > # rdt_config_file = ""
	I1028 17:58:19.157846   49274 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1028 17:58:19.157851   49274 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1028 17:58:19.157866   49274 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1028 17:58:19.157873   49274 command_runner.go:130] > # separate_pull_cgroup = ""
	I1028 17:58:19.157879   49274 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1028 17:58:19.157888   49274 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1028 17:58:19.157892   49274 command_runner.go:130] > # will be added.
	I1028 17:58:19.157896   49274 command_runner.go:130] > # default_capabilities = [
	I1028 17:58:19.157900   49274 command_runner.go:130] > # 	"CHOWN",
	I1028 17:58:19.157904   49274 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1028 17:58:19.157908   49274 command_runner.go:130] > # 	"FSETID",
	I1028 17:58:19.157912   49274 command_runner.go:130] > # 	"FOWNER",
	I1028 17:58:19.157918   49274 command_runner.go:130] > # 	"SETGID",
	I1028 17:58:19.157921   49274 command_runner.go:130] > # 	"SETUID",
	I1028 17:58:19.157928   49274 command_runner.go:130] > # 	"SETPCAP",
	I1028 17:58:19.157932   49274 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1028 17:58:19.157937   49274 command_runner.go:130] > # 	"KILL",
	I1028 17:58:19.157941   49274 command_runner.go:130] > # ]
	I1028 17:58:19.157947   49274 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1028 17:58:19.157956   49274 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1028 17:58:19.157960   49274 command_runner.go:130] > # add_inheritable_capabilities = false
	I1028 17:58:19.157969   49274 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1028 17:58:19.157975   49274 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 17:58:19.157981   49274 command_runner.go:130] > default_sysctls = [
	I1028 17:58:19.157985   49274 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1028 17:58:19.157989   49274 command_runner.go:130] > ]
	I1028 17:58:19.157994   49274 command_runner.go:130] > # List of devices on the host that a
	I1028 17:58:19.158002   49274 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1028 17:58:19.158006   49274 command_runner.go:130] > # allowed_devices = [
	I1028 17:58:19.158010   49274 command_runner.go:130] > # 	"/dev/fuse",
	I1028 17:58:19.158013   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158018   49274 command_runner.go:130] > # List of additional devices, specified as
	I1028 17:58:19.158027   49274 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1028 17:58:19.158032   49274 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1028 17:58:19.158040   49274 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 17:58:19.158045   49274 command_runner.go:130] > # additional_devices = [
	I1028 17:58:19.158048   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158053   49274 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1028 17:58:19.158059   49274 command_runner.go:130] > # cdi_spec_dirs = [
	I1028 17:58:19.158063   49274 command_runner.go:130] > # 	"/etc/cdi",
	I1028 17:58:19.158066   49274 command_runner.go:130] > # 	"/var/run/cdi",
	I1028 17:58:19.158071   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158077   49274 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1028 17:58:19.158083   49274 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1028 17:58:19.158089   49274 command_runner.go:130] > # Defaults to false.
	I1028 17:58:19.158094   49274 command_runner.go:130] > # device_ownership_from_security_context = false
	I1028 17:58:19.158100   49274 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1028 17:58:19.158108   49274 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1028 17:58:19.158112   49274 command_runner.go:130] > # hooks_dir = [
	I1028 17:58:19.158116   49274 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1028 17:58:19.158121   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158127   49274 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1028 17:58:19.158133   49274 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1028 17:58:19.158138   49274 command_runner.go:130] > # its default mounts from the following two files:
	I1028 17:58:19.158143   49274 command_runner.go:130] > #
	I1028 17:58:19.158149   49274 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1028 17:58:19.158157   49274 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1028 17:58:19.158163   49274 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1028 17:58:19.158168   49274 command_runner.go:130] > #
	I1028 17:58:19.158173   49274 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1028 17:58:19.158182   49274 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1028 17:58:19.158188   49274 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1028 17:58:19.158195   49274 command_runner.go:130] > #      only add mounts it finds in this file.
	I1028 17:58:19.158199   49274 command_runner.go:130] > #
	I1028 17:58:19.158203   49274 command_runner.go:130] > # default_mounts_file = ""
	I1028 17:58:19.158208   49274 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1028 17:58:19.158215   49274 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1028 17:58:19.158221   49274 command_runner.go:130] > pids_limit = 1024
	I1028 17:58:19.158228   49274 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1028 17:58:19.158233   49274 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1028 17:58:19.158241   49274 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1028 17:58:19.158251   49274 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1028 17:58:19.158259   49274 command_runner.go:130] > # log_size_max = -1
	I1028 17:58:19.158270   49274 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1028 17:58:19.158280   49274 command_runner.go:130] > # log_to_journald = false
	I1028 17:58:19.158289   49274 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1028 17:58:19.158300   49274 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1028 17:58:19.158311   49274 command_runner.go:130] > # Path to directory for container attach sockets.
	I1028 17:58:19.158322   49274 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1028 17:58:19.158329   49274 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1028 17:58:19.158340   49274 command_runner.go:130] > # bind_mount_prefix = ""
	I1028 17:58:19.158348   49274 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1028 17:58:19.158352   49274 command_runner.go:130] > # read_only = false
	I1028 17:58:19.158358   49274 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1028 17:58:19.158367   49274 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1028 17:58:19.158371   49274 command_runner.go:130] > # live configuration reload.
	I1028 17:58:19.158376   49274 command_runner.go:130] > # log_level = "info"
	I1028 17:58:19.158385   49274 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1028 17:58:19.158390   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.158396   49274 command_runner.go:130] > # log_filter = ""
	I1028 17:58:19.158402   49274 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1028 17:58:19.158412   49274 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1028 17:58:19.158417   49274 command_runner.go:130] > # separated by comma.
	I1028 17:58:19.158424   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158430   49274 command_runner.go:130] > # uid_mappings = ""
	I1028 17:58:19.158436   49274 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1028 17:58:19.158444   49274 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1028 17:58:19.158449   49274 command_runner.go:130] > # separated by comma.
	I1028 17:58:19.158458   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158462   49274 command_runner.go:130] > # gid_mappings = ""
	I1028 17:58:19.158469   49274 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1028 17:58:19.158478   49274 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 17:58:19.158484   49274 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 17:58:19.158494   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158501   49274 command_runner.go:130] > # minimum_mappable_uid = -1
	I1028 17:58:19.158507   49274 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1028 17:58:19.158515   49274 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 17:58:19.158521   49274 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 17:58:19.158531   49274 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 17:58:19.158536   49274 command_runner.go:130] > # minimum_mappable_gid = -1
	I1028 17:58:19.158544   49274 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1028 17:58:19.158550   49274 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1028 17:58:19.158557   49274 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1028 17:58:19.158561   49274 command_runner.go:130] > # ctr_stop_timeout = 30
	I1028 17:58:19.158569   49274 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1028 17:58:19.158575   49274 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1028 17:58:19.158582   49274 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1028 17:58:19.158587   49274 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1028 17:58:19.158593   49274 command_runner.go:130] > drop_infra_ctr = false
	I1028 17:58:19.158599   49274 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1028 17:58:19.158605   49274 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1028 17:58:19.158612   49274 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1028 17:58:19.158618   49274 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1028 17:58:19.158625   49274 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1028 17:58:19.158632   49274 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1028 17:58:19.158638   49274 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1028 17:58:19.158643   49274 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1028 17:58:19.158647   49274 command_runner.go:130] > # shared_cpuset = ""
	I1028 17:58:19.158653   49274 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1028 17:58:19.158660   49274 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1028 17:58:19.158665   49274 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1028 17:58:19.158673   49274 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1028 17:58:19.158678   49274 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1028 17:58:19.158683   49274 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1028 17:58:19.158689   49274 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1028 17:58:19.158694   49274 command_runner.go:130] > # enable_criu_support = false
	I1028 17:58:19.158700   49274 command_runner.go:130] > # Enable/disable the generation of the container,
	I1028 17:58:19.158708   49274 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1028 17:58:19.158712   49274 command_runner.go:130] > # enable_pod_events = false
	I1028 17:58:19.158721   49274 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 17:58:19.158727   49274 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 17:58:19.158734   49274 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1028 17:58:19.158738   49274 command_runner.go:130] > # default_runtime = "runc"
	I1028 17:58:19.158745   49274 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1028 17:58:19.158752   49274 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1028 17:58:19.158763   49274 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1028 17:58:19.158770   49274 command_runner.go:130] > # creation as a file is not desired either.
	I1028 17:58:19.158778   49274 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1028 17:58:19.158785   49274 command_runner.go:130] > # the hostname is being managed dynamically.
	I1028 17:58:19.158789   49274 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1028 17:58:19.158792   49274 command_runner.go:130] > # ]
	I1028 17:58:19.158798   49274 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1028 17:58:19.158806   49274 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1028 17:58:19.158812   49274 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1028 17:58:19.158820   49274 command_runner.go:130] > # Each entry in the table should follow the format:
	I1028 17:58:19.158823   49274 command_runner.go:130] > #
	I1028 17:58:19.158830   49274 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1028 17:58:19.158835   49274 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1028 17:58:19.158856   49274 command_runner.go:130] > # runtime_type = "oci"
	I1028 17:58:19.158863   49274 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1028 17:58:19.158867   49274 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1028 17:58:19.158872   49274 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1028 17:58:19.158877   49274 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1028 17:58:19.158884   49274 command_runner.go:130] > # monitor_env = []
	I1028 17:58:19.158888   49274 command_runner.go:130] > # privileged_without_host_devices = false
	I1028 17:58:19.158893   49274 command_runner.go:130] > # allowed_annotations = []
	I1028 17:58:19.158900   49274 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1028 17:58:19.158903   49274 command_runner.go:130] > # Where:
	I1028 17:58:19.158909   49274 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1028 17:58:19.158917   49274 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1028 17:58:19.158924   49274 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1028 17:58:19.158932   49274 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1028 17:58:19.158936   49274 command_runner.go:130] > #   in $PATH.
	I1028 17:58:19.158942   49274 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1028 17:58:19.158947   49274 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1028 17:58:19.158953   49274 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1028 17:58:19.158958   49274 command_runner.go:130] > #   state.
	I1028 17:58:19.158964   49274 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1028 17:58:19.158972   49274 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1028 17:58:19.158978   49274 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1028 17:58:19.158986   49274 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1028 17:58:19.158992   49274 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1028 17:58:19.158998   49274 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1028 17:58:19.159003   49274 command_runner.go:130] > #   The currently recognized values are:
	I1028 17:58:19.159009   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1028 17:58:19.159018   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1028 17:58:19.159024   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1028 17:58:19.159032   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1028 17:58:19.159040   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1028 17:58:19.159048   49274 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1028 17:58:19.159054   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1028 17:58:19.159062   49274 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1028 17:58:19.159068   49274 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1028 17:58:19.159076   49274 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1028 17:58:19.159080   49274 command_runner.go:130] > #   deprecated option "conmon".
	I1028 17:58:19.159088   49274 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1028 17:58:19.159096   49274 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1028 17:58:19.159102   49274 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1028 17:58:19.159109   49274 command_runner.go:130] > #   should be moved to the container's cgroup
	I1028 17:58:19.159117   49274 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1028 17:58:19.159124   49274 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1028 17:58:19.159130   49274 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1028 17:58:19.159138   49274 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1028 17:58:19.159141   49274 command_runner.go:130] > #
	I1028 17:58:19.159146   49274 command_runner.go:130] > # Using the seccomp notifier feature:
	I1028 17:58:19.159149   49274 command_runner.go:130] > #
	I1028 17:58:19.159157   49274 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1028 17:58:19.159163   49274 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1028 17:58:19.159168   49274 command_runner.go:130] > #
	I1028 17:58:19.159174   49274 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1028 17:58:19.159180   49274 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1028 17:58:19.159185   49274 command_runner.go:130] > #
	I1028 17:58:19.159190   49274 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1028 17:58:19.159196   49274 command_runner.go:130] > # feature.
	I1028 17:58:19.159198   49274 command_runner.go:130] > #
	I1028 17:58:19.159204   49274 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1028 17:58:19.159212   49274 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1028 17:58:19.159218   49274 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1028 17:58:19.159226   49274 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1028 17:58:19.159232   49274 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1028 17:58:19.159237   49274 command_runner.go:130] > #
	I1028 17:58:19.159243   49274 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1028 17:58:19.159252   49274 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1028 17:58:19.159260   49274 command_runner.go:130] > #
	I1028 17:58:19.159269   49274 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1028 17:58:19.159280   49274 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1028 17:58:19.159285   49274 command_runner.go:130] > #
	I1028 17:58:19.159296   49274 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1028 17:58:19.159308   49274 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1028 17:58:19.159316   49274 command_runner.go:130] > # limitation.
	I1028 17:58:19.159325   49274 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1028 17:58:19.159334   49274 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1028 17:58:19.159347   49274 command_runner.go:130] > runtime_type = "oci"
	I1028 17:58:19.159351   49274 command_runner.go:130] > runtime_root = "/run/runc"
	I1028 17:58:19.159356   49274 command_runner.go:130] > runtime_config_path = ""
	I1028 17:58:19.159361   49274 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1028 17:58:19.159367   49274 command_runner.go:130] > monitor_cgroup = "pod"
	I1028 17:58:19.159371   49274 command_runner.go:130] > monitor_exec_cgroup = ""
	I1028 17:58:19.159374   49274 command_runner.go:130] > monitor_env = [
	I1028 17:58:19.159380   49274 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 17:58:19.159385   49274 command_runner.go:130] > ]
	I1028 17:58:19.159392   49274 command_runner.go:130] > privileged_without_host_devices = false
	I1028 17:58:19.159400   49274 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1028 17:58:19.159406   49274 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1028 17:58:19.159414   49274 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1028 17:58:19.159422   49274 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1028 17:58:19.159432   49274 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1028 17:58:19.159440   49274 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1028 17:58:19.159449   49274 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1028 17:58:19.159458   49274 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1028 17:58:19.159464   49274 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1028 17:58:19.159473   49274 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1028 17:58:19.159478   49274 command_runner.go:130] > # Example:
	I1028 17:58:19.159482   49274 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1028 17:58:19.159489   49274 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1028 17:58:19.159493   49274 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1028 17:58:19.159501   49274 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1028 17:58:19.159505   49274 command_runner.go:130] > # cpuset = 0
	I1028 17:58:19.159511   49274 command_runner.go:130] > # cpushares = "0-1"
	I1028 17:58:19.159515   49274 command_runner.go:130] > # Where:
	I1028 17:58:19.159521   49274 command_runner.go:130] > # The workload name is workload-type.
	I1028 17:58:19.159527   49274 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1028 17:58:19.159534   49274 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1028 17:58:19.159540   49274 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1028 17:58:19.159550   49274 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1028 17:58:19.159555   49274 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1028 17:58:19.159561   49274 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1028 17:58:19.159568   49274 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1028 17:58:19.159575   49274 command_runner.go:130] > # Default value is set to true
	I1028 17:58:19.159579   49274 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1028 17:58:19.159588   49274 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1028 17:58:19.159592   49274 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1028 17:58:19.159599   49274 command_runner.go:130] > # Default value is set to 'false'
	I1028 17:58:19.159603   49274 command_runner.go:130] > # disable_hostport_mapping = false
	I1028 17:58:19.159609   49274 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1028 17:58:19.159612   49274 command_runner.go:130] > #
	I1028 17:58:19.159618   49274 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1028 17:58:19.159624   49274 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1028 17:58:19.159630   49274 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1028 17:58:19.159636   49274 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1028 17:58:19.159641   49274 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1028 17:58:19.159644   49274 command_runner.go:130] > [crio.image]
	I1028 17:58:19.159650   49274 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1028 17:58:19.159655   49274 command_runner.go:130] > # default_transport = "docker://"
	I1028 17:58:19.159660   49274 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1028 17:58:19.159666   49274 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1028 17:58:19.159670   49274 command_runner.go:130] > # global_auth_file = ""
	I1028 17:58:19.159674   49274 command_runner.go:130] > # The image used to instantiate infra containers.
	I1028 17:58:19.159679   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.159683   49274 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1028 17:58:19.159690   49274 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1028 17:58:19.159695   49274 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1028 17:58:19.159700   49274 command_runner.go:130] > # This option supports live configuration reload.
	I1028 17:58:19.159706   49274 command_runner.go:130] > # pause_image_auth_file = ""
	I1028 17:58:19.159712   49274 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1028 17:58:19.159718   49274 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1028 17:58:19.159725   49274 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1028 17:58:19.159731   49274 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1028 17:58:19.159738   49274 command_runner.go:130] > # pause_command = "/pause"
	I1028 17:58:19.159743   49274 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1028 17:58:19.159749   49274 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1028 17:58:19.159756   49274 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1028 17:58:19.159765   49274 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1028 17:58:19.159771   49274 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1028 17:58:19.159778   49274 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1028 17:58:19.159784   49274 command_runner.go:130] > # pinned_images = [
	I1028 17:58:19.159788   49274 command_runner.go:130] > # ]
	I1028 17:58:19.159796   49274 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1028 17:58:19.159802   49274 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1028 17:58:19.159810   49274 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1028 17:58:19.159818   49274 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1028 17:58:19.159824   49274 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1028 17:58:19.159830   49274 command_runner.go:130] > # signature_policy = ""
	I1028 17:58:19.159835   49274 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1028 17:58:19.159842   49274 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1028 17:58:19.159851   49274 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1028 17:58:19.159857   49274 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1028 17:58:19.159865   49274 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1028 17:58:19.159869   49274 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1028 17:58:19.159878   49274 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1028 17:58:19.159884   49274 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1028 17:58:19.159890   49274 command_runner.go:130] > # changing them here.
	I1028 17:58:19.159895   49274 command_runner.go:130] > # insecure_registries = [
	I1028 17:58:19.159900   49274 command_runner.go:130] > # ]
	I1028 17:58:19.159906   49274 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1028 17:58:19.159913   49274 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1028 17:58:19.159917   49274 command_runner.go:130] > # image_volumes = "mkdir"
	I1028 17:58:19.159923   49274 command_runner.go:130] > # Temporary directory to use for storing big files
	I1028 17:58:19.159927   49274 command_runner.go:130] > # big_files_temporary_dir = ""
	I1028 17:58:19.159933   49274 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1028 17:58:19.159939   49274 command_runner.go:130] > # CNI plugins.
	I1028 17:58:19.159942   49274 command_runner.go:130] > [crio.network]
	I1028 17:58:19.159948   49274 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1028 17:58:19.159955   49274 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1028 17:58:19.159960   49274 command_runner.go:130] > # cni_default_network = ""
	I1028 17:58:19.159969   49274 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1028 17:58:19.159973   49274 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1028 17:58:19.159978   49274 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1028 17:58:19.159984   49274 command_runner.go:130] > # plugin_dirs = [
	I1028 17:58:19.159987   49274 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1028 17:58:19.159991   49274 command_runner.go:130] > # ]
	I1028 17:58:19.159996   49274 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1028 17:58:19.160002   49274 command_runner.go:130] > [crio.metrics]
	I1028 17:58:19.160007   49274 command_runner.go:130] > # Globally enable or disable metrics support.
	I1028 17:58:19.160013   49274 command_runner.go:130] > enable_metrics = true
	I1028 17:58:19.160017   49274 command_runner.go:130] > # Specify enabled metrics collectors.
	I1028 17:58:19.160022   49274 command_runner.go:130] > # Per default all metrics are enabled.
	I1028 17:58:19.160030   49274 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1028 17:58:19.160037   49274 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1028 17:58:19.160045   49274 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1028 17:58:19.160049   49274 command_runner.go:130] > # metrics_collectors = [
	I1028 17:58:19.160055   49274 command_runner.go:130] > # 	"operations",
	I1028 17:58:19.160059   49274 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1028 17:58:19.160064   49274 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1028 17:58:19.160070   49274 command_runner.go:130] > # 	"operations_errors",
	I1028 17:58:19.160074   49274 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1028 17:58:19.160083   49274 command_runner.go:130] > # 	"image_pulls_by_name",
	I1028 17:58:19.160090   49274 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1028 17:58:19.160096   49274 command_runner.go:130] > # 	"image_pulls_failures",
	I1028 17:58:19.160100   49274 command_runner.go:130] > # 	"image_pulls_successes",
	I1028 17:58:19.160107   49274 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1028 17:58:19.160111   49274 command_runner.go:130] > # 	"image_layer_reuse",
	I1028 17:58:19.160115   49274 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1028 17:58:19.160119   49274 command_runner.go:130] > # 	"containers_oom_total",
	I1028 17:58:19.160123   49274 command_runner.go:130] > # 	"containers_oom",
	I1028 17:58:19.160128   49274 command_runner.go:130] > # 	"processes_defunct",
	I1028 17:58:19.160131   49274 command_runner.go:130] > # 	"operations_total",
	I1028 17:58:19.160136   49274 command_runner.go:130] > # 	"operations_latency_seconds",
	I1028 17:58:19.160143   49274 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1028 17:58:19.160147   49274 command_runner.go:130] > # 	"operations_errors_total",
	I1028 17:58:19.160152   49274 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1028 17:58:19.160158   49274 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1028 17:58:19.160162   49274 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1028 17:58:19.160168   49274 command_runner.go:130] > # 	"image_pulls_success_total",
	I1028 17:58:19.160172   49274 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1028 17:58:19.160176   49274 command_runner.go:130] > # 	"containers_oom_count_total",
	I1028 17:58:19.160181   49274 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1028 17:58:19.160185   49274 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1028 17:58:19.160191   49274 command_runner.go:130] > # ]
	I1028 17:58:19.160196   49274 command_runner.go:130] > # The port on which the metrics server will listen.
	I1028 17:58:19.160203   49274 command_runner.go:130] > # metrics_port = 9090
	I1028 17:58:19.160208   49274 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1028 17:58:19.160212   49274 command_runner.go:130] > # metrics_socket = ""
	I1028 17:58:19.160217   49274 command_runner.go:130] > # The certificate for the secure metrics server.
	I1028 17:58:19.160223   49274 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1028 17:58:19.160230   49274 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1028 17:58:19.160234   49274 command_runner.go:130] > # certificate on any modification event.
	I1028 17:58:19.160240   49274 command_runner.go:130] > # metrics_cert = ""
	I1028 17:58:19.160245   49274 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1028 17:58:19.160253   49274 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1028 17:58:19.160260   49274 command_runner.go:130] > # metrics_key = ""
	I1028 17:58:19.160271   49274 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1028 17:58:19.160280   49274 command_runner.go:130] > [crio.tracing]
	I1028 17:58:19.160289   49274 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1028 17:58:19.160298   49274 command_runner.go:130] > # enable_tracing = false
	I1028 17:58:19.160306   49274 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1028 17:58:19.160316   49274 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1028 17:58:19.160326   49274 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1028 17:58:19.160340   49274 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1028 17:58:19.160349   49274 command_runner.go:130] > # CRI-O NRI configuration.
	I1028 17:58:19.160356   49274 command_runner.go:130] > [crio.nri]
	I1028 17:58:19.160363   49274 command_runner.go:130] > # Globally enable or disable NRI.
	I1028 17:58:19.160367   49274 command_runner.go:130] > # enable_nri = false
	I1028 17:58:19.160373   49274 command_runner.go:130] > # NRI socket to listen on.
	I1028 17:58:19.160378   49274 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1028 17:58:19.160384   49274 command_runner.go:130] > # NRI plugin directory to use.
	I1028 17:58:19.160389   49274 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1028 17:58:19.160396   49274 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1028 17:58:19.160401   49274 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1028 17:58:19.160406   49274 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1028 17:58:19.160413   49274 command_runner.go:130] > # nri_disable_connections = false
	I1028 17:58:19.160418   49274 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1028 17:58:19.160423   49274 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1028 17:58:19.160429   49274 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1028 17:58:19.160435   49274 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1028 17:58:19.160441   49274 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1028 17:58:19.160447   49274 command_runner.go:130] > [crio.stats]
	I1028 17:58:19.160453   49274 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1028 17:58:19.160461   49274 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1028 17:58:19.160465   49274 command_runner.go:130] > # stats_collection_period = 0
	I1028 17:58:19.160555   49274 cni.go:84] Creating CNI manager for ""
	I1028 17:58:19.160569   49274 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 17:58:19.160580   49274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 17:58:19.160605   49274 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-949956 NodeName:multinode-949956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 17:58:19.160717   49274 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-949956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.203"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 17:58:19.160774   49274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 17:58:19.170814   49274 command_runner.go:130] > kubeadm
	I1028 17:58:19.170832   49274 command_runner.go:130] > kubectl
	I1028 17:58:19.170837   49274 command_runner.go:130] > kubelet
	I1028 17:58:19.170925   49274 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 17:58:19.170977   49274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 17:58:19.180017   49274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 17:58:19.196276   49274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 17:58:19.211917   49274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1028 17:58:19.227954   49274 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I1028 17:58:19.231678   49274 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I1028 17:58:19.231715   49274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 17:58:19.367105   49274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 17:58:19.382093   49274 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956 for IP: 192.168.39.203
	I1028 17:58:19.382114   49274 certs.go:194] generating shared ca certs ...
	I1028 17:58:19.382131   49274 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 17:58:19.382298   49274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 17:58:19.382354   49274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 17:58:19.382370   49274 certs.go:256] generating profile certs ...
	I1028 17:58:19.382487   49274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/client.key
	I1028 17:58:19.382560   49274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.key.00aa27e5
	I1028 17:58:19.382607   49274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.key
	I1028 17:58:19.382627   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 17:58:19.382648   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 17:58:19.382665   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 17:58:19.382681   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 17:58:19.382696   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 17:58:19.382715   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 17:58:19.382732   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 17:58:19.382751   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 17:58:19.382820   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 17:58:19.382869   49274 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 17:58:19.382884   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 17:58:19.382912   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 17:58:19.382945   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 17:58:19.382975   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 17:58:19.383032   49274 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 17:58:19.383068   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.383088   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem -> /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.383106   49274 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.383724   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 17:58:19.408076   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 17:58:19.431343   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 17:58:19.454534   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 17:58:19.477441   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 17:58:19.500412   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 17:58:19.524138   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 17:58:19.547533   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/multinode-949956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 17:58:19.570628   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 17:58:19.593563   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 17:58:19.621654   49274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 17:58:19.644520   49274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 17:58:19.660332   49274 ssh_runner.go:195] Run: openssl version
	I1028 17:58:19.665928   49274 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 17:58:19.666091   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 17:58:19.676403   49274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.680748   49274 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.680918   49274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.680961   49274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 17:58:19.686441   49274 command_runner.go:130] > 3ec20f2e
	I1028 17:58:19.686497   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 17:58:19.695227   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 17:58:19.705265   49274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.709474   49274 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.709626   49274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.709671   49274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 17:58:19.714897   49274 command_runner.go:130] > b5213941
	I1028 17:58:19.715087   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 17:58:19.723727   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 17:58:19.733811   49274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.738135   49274 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.738428   49274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.738467   49274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 17:58:19.743815   49274 command_runner.go:130] > 51391683
	I1028 17:58:19.743865   49274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 17:58:19.752625   49274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:58:19.756995   49274 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 17:58:19.757031   49274 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 17:58:19.757040   49274 command_runner.go:130] > Device: 253,1	Inode: 532782      Links: 1
	I1028 17:58:19.757052   49274 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 17:58:19.757065   49274 command_runner.go:130] > Access: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757073   49274 command_runner.go:130] > Modify: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757083   49274 command_runner.go:130] > Change: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757093   49274 command_runner.go:130] >  Birth: 2024-10-28 17:51:19.202907260 +0000
	I1028 17:58:19.757131   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 17:58:19.762683   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.762737   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 17:58:19.767989   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.768159   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 17:58:19.773529   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.773584   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 17:58:19.778825   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.778997   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 17:58:19.784416   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.784479   49274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 17:58:19.789781   49274 command_runner.go:130] > Certificate will not expire
	I1028 17:58:19.789869   49274 kubeadm.go:392] StartCluster: {Name:multinode-949956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-949956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:58:19.790005   49274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 17:58:19.790042   49274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 17:58:19.826311   49274 command_runner.go:130] > 03d9d60b5a766544e09d26870287f8e1126cb621ed21d15ffd8b49316e74b88a
	I1028 17:58:19.826347   49274 command_runner.go:130] > f561182f7915f4870775ac8540f78f768c4b6604993a02d6fee46c33b43858e3
	I1028 17:58:19.826356   49274 command_runner.go:130] > 5500ceb27304706a4e21106a195e38a6d57c4ee046146168e8d435a2ceadf143
	I1028 17:58:19.826366   49274 command_runner.go:130] > fd134912be1f33eb5df2f51c1c091b2782b720093e8145a73fc3ced1ed3d61b0
	I1028 17:58:19.826374   49274 command_runner.go:130] > 6be4f0150414ecb719308e654dfe475cc60d922a30553913db6f21c791604523
	I1028 17:58:19.826387   49274 command_runner.go:130] > 4f1fcae7239a1074023a23c8ca05de17f39ebad262a1d6e58d4752e0649431a2
	I1028 17:58:19.826396   49274 command_runner.go:130] > a878b44f1390e731efe4ea8becae131923aee9984a263360abcde7ab1efbaf4c
	I1028 17:58:19.826406   49274 command_runner.go:130] > c8c4b6d9475bbd1e1e80a611f61fe02c69d83a9a3f482001baf1517cf848d1c5
	I1028 17:58:19.826433   49274 cri.go:89] found id: "03d9d60b5a766544e09d26870287f8e1126cb621ed21d15ffd8b49316e74b88a"
	I1028 17:58:19.826444   49274 cri.go:89] found id: "f561182f7915f4870775ac8540f78f768c4b6604993a02d6fee46c33b43858e3"
	I1028 17:58:19.826449   49274 cri.go:89] found id: "5500ceb27304706a4e21106a195e38a6d57c4ee046146168e8d435a2ceadf143"
	I1028 17:58:19.826454   49274 cri.go:89] found id: "fd134912be1f33eb5df2f51c1c091b2782b720093e8145a73fc3ced1ed3d61b0"
	I1028 17:58:19.826461   49274 cri.go:89] found id: "6be4f0150414ecb719308e654dfe475cc60d922a30553913db6f21c791604523"
	I1028 17:58:19.826466   49274 cri.go:89] found id: "4f1fcae7239a1074023a23c8ca05de17f39ebad262a1d6e58d4752e0649431a2"
	I1028 17:58:19.826473   49274 cri.go:89] found id: "a878b44f1390e731efe4ea8becae131923aee9984a263360abcde7ab1efbaf4c"
	I1028 17:58:19.826477   49274 cri.go:89] found id: "c8c4b6d9475bbd1e1e80a611f61fe02c69d83a9a3f482001baf1517cf848d1c5"
	I1028 17:58:19.826480   49274 cri.go:89] found id: ""
	I1028 17:58:19.826513   49274 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
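The tail of the log above shows the certificate installation and validation steps minikube runs over SSH: each CA bundle is linked into /etc/ssl/certs, an OpenSSL subject-hash symlink is added for it, and the cluster certificates are checked for imminent expiry. A minimal shell sketch of those same steps, with the paths and hash value taken from the log (the hash differs per certificate), is:

	# link the CA into /etc/ssl/certs and add its OpenSSL subject-hash symlink
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0

	# exit non-zero if the certificate expires within the next 86400 seconds (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400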
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-949956 -n multinode-949956
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-949956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.19s)
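As part of StartCluster, minikube first enumerates the existing kube-system containers through the CRI before deciding how to restart them (the cri.go lines near the end of the log above). The same check can be run by hand on the node using the commands the log records:

	# list all kube-system container IDs known to CRI-O
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# cross-check against the low-level runtime's view of running containers
	sudo runc list -f json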

                                                
                                    
TestPreload (239.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-598338 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1028 18:08:38.394935   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-598338 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m26.975218122s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-598338 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-598338 image pull gcr.io/k8s-minikube/busybox: (5.962427241s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-598338
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-598338: (7.285452539s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-598338 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1028 18:10:16.508644   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-598338 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.413409227s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-598338 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
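The image list above confirms the failure mode: the gcr.io/k8s-minikube/busybox image pulled before the stop is no longer present after the restart, while the preloaded v1.24.4 images are. A condensed repro of the sequence the test drives, with the flags copied from the preload_test.go invocations above (the final grep is added only as a convenience check), would be roughly:

	out/minikube-linux-amd64 start -p test-preload-598338 --memory=2200 --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-598338 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-598338
	out/minikube-linux-amd64 start -p test-preload-598338 --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-598338 image list | grep busybox    # expected to match; returns nothing here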
panic.go:629: *** TestPreload FAILED at 2024-10-28 18:10:21.717937204 +0000 UTC m=+3861.175981522
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-598338 -n test-preload-598338
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-598338 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-598338 logs -n 25: (1.034283563s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956 sudo cat                                       | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m03_multinode-949956.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt                       | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m02:/home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n                                                                 | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | multinode-949956-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-949956 ssh -n multinode-949956-m02 sudo cat                                   | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	|         | /home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-949956 node stop m03                                                          | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:53 UTC |
	| node    | multinode-949956 node start                                                             | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:53 UTC | 28 Oct 24 17:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-949956                                                                | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:54 UTC |                     |
	| stop    | -p multinode-949956                                                                     | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:54 UTC |                     |
	| start   | -p multinode-949956                                                                     | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 17:56 UTC | 28 Oct 24 18:00 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-949956                                                                | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 18:00 UTC |                     |
	| node    | multinode-949956 node delete                                                            | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 18:00 UTC | 28 Oct 24 18:00 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-949956 stop                                                                   | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 18:00 UTC |                     |
	| start   | -p multinode-949956                                                                     | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 18:02 UTC | 28 Oct 24 18:05 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-949956                                                                | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 18:05 UTC |                     |
	| start   | -p multinode-949956-m02                                                                 | multinode-949956-m02 | jenkins | v1.34.0 | 28 Oct 24 18:05 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-949956-m03                                                                 | multinode-949956-m03 | jenkins | v1.34.0 | 28 Oct 24 18:05 UTC | 28 Oct 24 18:06 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-949956                                                                 | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 18:06 UTC |                     |
	| delete  | -p multinode-949956-m03                                                                 | multinode-949956-m03 | jenkins | v1.34.0 | 28 Oct 24 18:06 UTC | 28 Oct 24 18:06 UTC |
	| delete  | -p multinode-949956                                                                     | multinode-949956     | jenkins | v1.34.0 | 28 Oct 24 18:06 UTC | 28 Oct 24 18:06 UTC |
	| start   | -p test-preload-598338                                                                  | test-preload-598338  | jenkins | v1.34.0 | 28 Oct 24 18:06 UTC | 28 Oct 24 18:08 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-598338 image pull                                                          | test-preload-598338  | jenkins | v1.34.0 | 28 Oct 24 18:08 UTC | 28 Oct 24 18:08 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-598338                                                                  | test-preload-598338  | jenkins | v1.34.0 | 28 Oct 24 18:08 UTC | 28 Oct 24 18:09 UTC |
	| start   | -p test-preload-598338                                                                  | test-preload-598338  | jenkins | v1.34.0 | 28 Oct 24 18:09 UTC | 28 Oct 24 18:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-598338 image list                                                          | test-preload-598338  | jenkins | v1.34.0 | 28 Oct 24 18:10 UTC | 28 Oct 24 18:10 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:09:05
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:09:05.119689   53868 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:09:05.119901   53868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:09:05.119908   53868 out.go:358] Setting ErrFile to fd 2...
	I1028 18:09:05.119913   53868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:09:05.120085   53868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:09:05.120599   53868 out.go:352] Setting JSON to false
	I1028 18:09:05.121430   53868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6688,"bootTime":1730132257,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:09:05.121519   53868 start.go:139] virtualization: kvm guest
	I1028 18:09:05.123527   53868 out.go:177] * [test-preload-598338] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:09:05.125339   53868 notify.go:220] Checking for updates...
	I1028 18:09:05.125358   53868 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:09:05.126668   53868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:09:05.128029   53868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:09:05.129318   53868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:09:05.130720   53868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:09:05.132227   53868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:09:05.133871   53868 config.go:182] Loaded profile config "test-preload-598338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1028 18:09:05.134246   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:09:05.134304   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:09:05.148497   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I1028 18:09:05.148872   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:09:05.149359   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:09:05.149378   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:09:05.149759   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:09:05.149925   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:05.151410   53868 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 18:09:05.152655   53868 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:09:05.152980   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:09:05.153018   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:09:05.167015   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I1028 18:09:05.167436   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:09:05.167870   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:09:05.167889   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:09:05.168157   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:09:05.168323   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:05.200735   53868 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:09:05.202083   53868 start.go:297] selected driver: kvm2
	I1028 18:09:05.202093   53868 start.go:901] validating driver "kvm2" against &{Name:test-preload-598338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-598338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:09:05.202177   53868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:09:05.202833   53868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:09:05.202906   53868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:09:05.216990   53868 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:09:05.217299   53868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:09:05.217324   53868 cni.go:84] Creating CNI manager for ""
	I1028 18:09:05.217371   53868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:09:05.217418   53868 start.go:340] cluster config:
	{Name:test-preload-598338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-598338 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:09:05.217524   53868 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:09:05.219716   53868 out.go:177] * Starting "test-preload-598338" primary control-plane node in "test-preload-598338" cluster
	I1028 18:09:05.220948   53868 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1028 18:09:05.950975   53868 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1028 18:09:05.951005   53868 cache.go:56] Caching tarball of preloaded images
	I1028 18:09:05.951171   53868 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1028 18:09:05.953362   53868 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1028 18:09:05.954864   53868 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1028 18:09:06.111616   53868 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1028 18:09:28.577696   53868 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1028 18:09:28.577813   53868 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1028 18:09:29.412968   53868 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1028 18:09:29.413115   53868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/config.json ...
	I1028 18:09:29.413366   53868 start.go:360] acquireMachinesLock for test-preload-598338: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:09:29.413446   53868 start.go:364] duration metric: took 55.886µs to acquireMachinesLock for "test-preload-598338"
	I1028 18:09:29.413468   53868 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:09:29.413476   53868 fix.go:54] fixHost starting: 
	I1028 18:09:29.413767   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:09:29.413812   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:09:29.428271   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1028 18:09:29.428727   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:09:29.429191   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:09:29.429211   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:09:29.429538   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:09:29.429709   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:29.429853   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetState
	I1028 18:09:29.431249   53868 fix.go:112] recreateIfNeeded on test-preload-598338: state=Stopped err=<nil>
	I1028 18:09:29.431281   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	W1028 18:09:29.431399   53868 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:09:29.433255   53868 out.go:177] * Restarting existing kvm2 VM for "test-preload-598338" ...
	I1028 18:09:29.434604   53868 main.go:141] libmachine: (test-preload-598338) Calling .Start
	I1028 18:09:29.434730   53868 main.go:141] libmachine: (test-preload-598338) Ensuring networks are active...
	I1028 18:09:29.435352   53868 main.go:141] libmachine: (test-preload-598338) Ensuring network default is active
	I1028 18:09:29.435565   53868 main.go:141] libmachine: (test-preload-598338) Ensuring network mk-test-preload-598338 is active
	I1028 18:09:29.435823   53868 main.go:141] libmachine: (test-preload-598338) Getting domain xml...
	I1028 18:09:29.436440   53868 main.go:141] libmachine: (test-preload-598338) Creating domain...
	I1028 18:09:30.605656   53868 main.go:141] libmachine: (test-preload-598338) Waiting to get IP...
	I1028 18:09:30.606414   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:30.606744   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:30.606834   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:30.606746   53983 retry.go:31] will retry after 200.869024ms: waiting for machine to come up
	I1028 18:09:30.809262   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:30.809620   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:30.809643   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:30.809578   53983 retry.go:31] will retry after 254.647693ms: waiting for machine to come up
	I1028 18:09:31.065951   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:31.066328   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:31.066358   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:31.066274   53983 retry.go:31] will retry after 402.028209ms: waiting for machine to come up
	I1028 18:09:31.469638   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:31.470058   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:31.470088   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:31.470006   53983 retry.go:31] will retry after 504.480037ms: waiting for machine to come up
	I1028 18:09:31.975542   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:31.975929   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:31.975955   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:31.975890   53983 retry.go:31] will retry after 590.750217ms: waiting for machine to come up
	I1028 18:09:32.568486   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:32.568863   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:32.568881   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:32.568838   53983 retry.go:31] will retry after 897.716423ms: waiting for machine to come up
	I1028 18:09:33.467628   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:33.468087   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:33.468121   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:33.468033   53983 retry.go:31] will retry after 881.247222ms: waiting for machine to come up
	I1028 18:09:34.350961   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:34.351405   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:34.351435   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:34.351350   53983 retry.go:31] will retry after 1.387331745s: waiting for machine to come up
	I1028 18:09:35.740170   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:35.740502   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:35.740542   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:35.740412   53983 retry.go:31] will retry after 1.831714134s: waiting for machine to come up
	I1028 18:09:37.574319   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:37.574728   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:37.574756   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:37.574699   53983 retry.go:31] will retry after 1.684808753s: waiting for machine to come up
	I1028 18:09:39.261508   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:39.261875   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:39.261902   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:39.261829   53983 retry.go:31] will retry after 1.901846447s: waiting for machine to come up
	I1028 18:09:41.165471   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:41.166071   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:41.166100   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:41.166014   53983 retry.go:31] will retry after 2.933551966s: waiting for machine to come up
	I1028 18:09:44.103084   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:44.103516   53868 main.go:141] libmachine: (test-preload-598338) DBG | unable to find current IP address of domain test-preload-598338 in network mk-test-preload-598338
	I1028 18:09:44.103545   53868 main.go:141] libmachine: (test-preload-598338) DBG | I1028 18:09:44.103478   53983 retry.go:31] will retry after 2.8470477s: waiting for machine to come up
	I1028 18:09:46.953625   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:46.954000   53868 main.go:141] libmachine: (test-preload-598338) Found IP for machine: 192.168.39.7
	I1028 18:09:46.954019   53868 main.go:141] libmachine: (test-preload-598338) Reserving static IP address...
	I1028 18:09:46.954030   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has current primary IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:46.954406   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "test-preload-598338", mac: "52:54:00:99:f5:eb", ip: "192.168.39.7"} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:46.954431   53868 main.go:141] libmachine: (test-preload-598338) DBG | skip adding static IP to network mk-test-preload-598338 - found existing host DHCP lease matching {name: "test-preload-598338", mac: "52:54:00:99:f5:eb", ip: "192.168.39.7"}
	I1028 18:09:46.954444   53868 main.go:141] libmachine: (test-preload-598338) Reserved static IP address: 192.168.39.7
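The retry.go lines above show the kvm2 driver polling libvirt's DHCP leases for the domain's MAC address, sleeping a little longer after each miss until an IP appears. A minimal Go sketch of that wait-for-IP pattern follows; lookupLeaseIP, the starting delay, and the growth factor are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt DHCP
// leases for the domain's MAC address; it fails until the guest has an address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until an IP shows up, sleeping a growing, jittered delay
// after each failed attempt, much like the "will retry after ..." lines above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay gradually
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:99:f5:eb", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("Found IP for machine:", ip)
	}
}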
	I1028 18:09:46.954462   53868 main.go:141] libmachine: (test-preload-598338) Waiting for SSH to be available...
	I1028 18:09:46.954479   53868 main.go:141] libmachine: (test-preload-598338) DBG | Getting to WaitForSSH function...
	I1028 18:09:46.956503   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:46.956809   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:46.956839   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:46.956910   53868 main.go:141] libmachine: (test-preload-598338) DBG | Using SSH client type: external
	I1028 18:09:46.956944   53868 main.go:141] libmachine: (test-preload-598338) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa (-rw-------)
	I1028 18:09:46.956978   53868 main.go:141] libmachine: (test-preload-598338) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:09:46.956990   53868 main.go:141] libmachine: (test-preload-598338) DBG | About to run SSH command:
	I1028 18:09:46.957004   53868 main.go:141] libmachine: (test-preload-598338) DBG | exit 0
	I1028 18:09:47.087377   53868 main.go:141] libmachine: (test-preload-598338) DBG | SSH cmd err, output: <nil>: 
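The "Using SSH client type: external" lines show the availability probe being run as a plain ssh invocation with host-key checking disabled, key-only auth against the machine's id_rsa, and "exit 0" as the remote command; success means sshd is answering. A rough sketch of building that probe with os/exec, with the key path and address taken from this run (and otherwise assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags mirror the ones printed in the log: no known_hosts pollution,
	// short connect timeout, identity-only auth with the machine key.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/.minikube/machines/test-preload-598338/id_rsa", // assumed key path
		"-p", "22",
		"docker@192.168.39.7",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}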
	I1028 18:09:47.087769   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetConfigRaw
	I1028 18:09:47.088504   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetIP
	I1028 18:09:47.091321   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.091732   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.091754   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.091998   53868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/config.json ...
	I1028 18:09:47.092217   53868 machine.go:93] provisionDockerMachine start ...
	I1028 18:09:47.092235   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:47.092432   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:47.094782   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.095111   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.095138   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.095291   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:47.095459   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.095602   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.095731   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:47.095880   53868 main.go:141] libmachine: Using SSH client type: native
	I1028 18:09:47.096051   53868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1028 18:09:47.096062   53868 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:09:47.208411   53868 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:09:47.208441   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetMachineName
	I1028 18:09:47.208677   53868 buildroot.go:166] provisioning hostname "test-preload-598338"
	I1028 18:09:47.208711   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetMachineName
	I1028 18:09:47.208873   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:47.211180   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.211544   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.211571   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.211703   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:47.211854   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.212012   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.212154   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:47.212277   53868 main.go:141] libmachine: Using SSH client type: native
	I1028 18:09:47.212500   53868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1028 18:09:47.212516   53868 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-598338 && echo "test-preload-598338" | sudo tee /etc/hostname
	I1028 18:09:47.337462   53868 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-598338
	
	I1028 18:09:47.337494   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:47.340058   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.340347   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.340375   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.340509   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:47.340662   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.340786   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.340932   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:47.341242   53868 main.go:141] libmachine: Using SSH client type: native
	I1028 18:09:47.341390   53868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1028 18:09:47.341404   53868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-598338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-598338/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-598338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:09:47.460739   53868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
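Hostname provisioning above happens in two remote steps: set the hostname and write /etc/hostname, then patch /etc/hosts idempotently (rewrite an existing 127.0.1.1 entry, otherwise append one, and do nothing if the name is already mapped). A sketch of assembling that same shell fragment in Go, with the hostname as the only parameter; this mirrors the script in the log rather than quoting minikube's source:

package main

import "fmt"

// hostsPatchScript returns the shell fragment shown above: it only touches
// /etc/hosts when no line already maps the hostname, and prefers rewriting
// an existing 127.0.1.1 entry over appending a new one.
func hostsPatchScript(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Println(`sudo hostname test-preload-598338 && echo "test-preload-598338" | sudo tee /etc/hostname`)
	fmt.Println(hostsPatchScript("test-preload-598338"))
}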
	I1028 18:09:47.460769   53868 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:09:47.460793   53868 buildroot.go:174] setting up certificates
	I1028 18:09:47.460807   53868 provision.go:84] configureAuth start
	I1028 18:09:47.460819   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetMachineName
	I1028 18:09:47.461074   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetIP
	I1028 18:09:47.463312   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.463585   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.463610   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.463764   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:47.465812   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.466084   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.466122   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.466215   53868 provision.go:143] copyHostCerts
	I1028 18:09:47.466274   53868 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:09:47.466285   53868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:09:47.466356   53868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:09:47.466461   53868 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:09:47.466468   53868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:09:47.466492   53868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:09:47.466548   53868 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:09:47.466556   53868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:09:47.466575   53868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:09:47.466621   53868 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.test-preload-598338 san=[127.0.0.1 192.168.39.7 localhost minikube test-preload-598338]
	I1028 18:09:47.679066   53868 provision.go:177] copyRemoteCerts
	I1028 18:09:47.679120   53868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:09:47.679145   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:47.681527   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.681877   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.681905   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.682042   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:47.682200   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.682334   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:47.682452   53868 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa Username:docker}
	I1028 18:09:47.766042   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:09:47.789803   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 18:09:47.812669   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:09:47.835118   53868 provision.go:87] duration metric: took 374.299218ms to configureAuth
	I1028 18:09:47.835152   53868 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:09:47.835348   53868 config.go:182] Loaded profile config "test-preload-598338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1028 18:09:47.835424   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:47.837731   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.838062   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:47.838093   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:47.838218   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:47.838407   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.838561   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:47.838692   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:47.838816   53868 main.go:141] libmachine: Using SSH client type: native
	I1028 18:09:47.838957   53868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1028 18:09:47.838971   53868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:09:48.064415   53868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:09:48.064437   53868 machine.go:96] duration metric: took 972.207707ms to provisionDockerMachine
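The container-runtime step just before this writes a one-line sysconfig drop-in carrying the insecure-registry flag for the service CIDR and restarts crio so it takes effect. A small sketch of composing that remote command string; the CIDR value is the one from this run and the helper name is an assumption:

package main

import "fmt"

// crioOptionsCmd builds the remote command shown in the log: create
// /etc/sysconfig, write CRIO_MINIKUBE_OPTIONS, and restart the service.
func crioOptionsCmd(serviceCIDR string) string {
	return fmt.Sprintf(
		`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, serviceCIDR)
}

func main() {
	fmt.Println(crioOptionsCmd("10.96.0.0/12"))
}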
	I1028 18:09:48.064447   53868 start.go:293] postStartSetup for "test-preload-598338" (driver="kvm2")
	I1028 18:09:48.064457   53868 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:09:48.064486   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:48.064755   53868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:09:48.064782   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:48.067155   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.067465   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:48.067493   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.067580   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:48.067727   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:48.067882   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:48.068042   53868 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa Username:docker}
	I1028 18:09:48.154523   53868 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:09:48.158698   53868 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:09:48.158721   53868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:09:48.158788   53868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:09:48.158890   53868 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:09:48.159004   53868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:09:48.167830   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:09:48.191019   53868 start.go:296] duration metric: took 126.560033ms for postStartSetup
	I1028 18:09:48.191065   53868 fix.go:56] duration metric: took 18.777590057s for fixHost
	I1028 18:09:48.191083   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:48.193401   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.193699   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:48.193727   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.193884   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:48.194076   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:48.194224   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:48.194355   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:48.194495   53868 main.go:141] libmachine: Using SSH client type: native
	I1028 18:09:48.194649   53868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1028 18:09:48.194659   53868 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:09:48.304972   53868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730138988.273742377
	
	I1028 18:09:48.304995   53868 fix.go:216] guest clock: 1730138988.273742377
	I1028 18:09:48.305002   53868 fix.go:229] Guest: 2024-10-28 18:09:48.273742377 +0000 UTC Remote: 2024-10-28 18:09:48.191068619 +0000 UTC m=+43.107316582 (delta=82.673758ms)
	I1028 18:09:48.305020   53868 fix.go:200] guest clock delta is within tolerance: 82.673758ms
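The fix.go lines above read the guest clock with "date +%s.%N" over SSH, compare it to the host's view of the same moment, and only resync when the absolute delta exceeds a tolerance. A compact sketch of that comparison using the timestamps from this run; the one-second tolerance is an assumed threshold for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log above: guest clock and the host's "Remote" time.
	guest := time.Unix(1730138988, 273742377)
	host := time.Date(2024, 10, 28, 18, 9, 48, 191068619, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v, would resync\n", delta)
	}
}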
	I1028 18:09:48.305034   53868 start.go:83] releasing machines lock for "test-preload-598338", held for 18.891566442s
	I1028 18:09:48.305051   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:48.305281   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetIP
	I1028 18:09:48.307540   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.307821   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:48.307842   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.308001   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:48.308434   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:48.308611   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:09:48.308688   53868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:09:48.308723   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:48.308779   53868 ssh_runner.go:195] Run: cat /version.json
	I1028 18:09:48.308799   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:09:48.311088   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.311429   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:48.311447   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.311464   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.311634   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:48.311793   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:48.311919   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:48.311941   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:48.311943   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:48.312056   53868 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa Username:docker}
	I1028 18:09:48.312100   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:09:48.312221   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:09:48.312324   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:09:48.312513   53868 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa Username:docker}
	I1028 18:09:48.415584   53868 ssh_runner.go:195] Run: systemctl --version
	I1028 18:09:48.421174   53868 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:09:48.569787   53868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:09:48.575440   53868 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:09:48.575503   53868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:09:48.591166   53868 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:09:48.591185   53868 start.go:495] detecting cgroup driver to use...
	I1028 18:09:48.591239   53868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:09:48.606187   53868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:09:48.619276   53868 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:09:48.619321   53868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:09:48.631827   53868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:09:48.644830   53868 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:09:48.753123   53868 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:09:48.900460   53868 docker.go:233] disabling docker service ...
	I1028 18:09:48.900545   53868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:09:48.914656   53868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:09:48.927414   53868 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:09:49.055845   53868 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:09:49.189606   53868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:09:49.203716   53868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:09:49.222334   53868 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1028 18:09:49.222403   53868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:09:49.232800   53868 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:09:49.232857   53868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:09:49.243249   53868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:09:49.253391   53868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:09:49.263517   53868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:09:49.273606   53868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:09:49.283369   53868 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:09:49.299773   53868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:09:49.309545   53868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:09:49.318380   53868 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:09:49.318429   53868 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:09:49.330376   53868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
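The netfilter check above is allowed to fail: when sysctl cannot stat bridge-nf-call-iptables the br_netfilter module simply is not loaded yet, so the code loads it with modprobe and then enables IPv4 forwarding either way. A sketch of that fallback order over a generic command runner (run here executes locally and stands in for ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command and returns its error; a local stand-in for
// the ssh_runner calls in the log.
func run(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	// Probe first; a missing /proc/sys entry just means br_netfilter is not
	// loaded, which "might be okay" and is handled by modprobe below.
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo modprobe br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Enable forwarding regardless; kube-proxy and the CNI need it.
	if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}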
	I1028 18:09:49.339127   53868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:09:49.463252   53868 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:09:49.644372   53868 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:09:49.644436   53868 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:09:49.649052   53868 start.go:563] Will wait 60s for crictl version
	I1028 18:09:49.649104   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:49.652791   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:09:49.691559   53868 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:09:49.691641   53868 ssh_runner.go:195] Run: crio --version
	I1028 18:09:49.718401   53868 ssh_runner.go:195] Run: crio --version
	I1028 18:09:49.747305   53868 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1028 18:09:49.748512   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetIP
	I1028 18:09:49.751070   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:49.751408   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:09:49.751437   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:09:49.751622   53868 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 18:09:49.755704   53868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:09:49.767778   53868 kubeadm.go:883] updating cluster {Name:test-preload-598338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-598338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:09:49.767879   53868 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1028 18:09:49.767917   53868 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:09:49.802656   53868 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1028 18:09:49.802714   53868 ssh_runner.go:195] Run: which lz4
	I1028 18:09:49.806710   53868 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:09:49.810848   53868 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:09:49.810876   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1028 18:09:51.297780   53868 crio.go:462] duration metric: took 1.491091261s to copy over tarball
	I1028 18:09:51.297854   53868 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:09:53.589332   53868 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.29144364s)
	I1028 18:09:53.589365   53868 crio.go:469] duration metric: took 2.291556274s to extract the tarball
	I1028 18:09:53.589377   53868 ssh_runner.go:146] rm: /preloaded.tar.lz4
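The preload step above first stats /preloaded.tar.lz4 on the guest, transfers the cached tarball only when it is missing, untars it into /var with xattrs preserved, and removes the tarball afterwards. A sketch of that sequence under the assumption of a simple runOnGuest helper (the cache path below is illustrative and the scp step is elided):

package main

import (
	"fmt"
	"os/exec"
)

// runOnGuest is a hypothetical stand-in for ssh_runner: it runs a command on
// the VM and returns an error when the command exits non-zero. In this
// sketch it runs locally.
func runOnGuest(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	cached := "/home/jenkins/.minikube/cache/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" // assumed path

	// Only transfer when the guest does not already have the tarball.
	if err := runOnGuest(fmt.Sprintf(`stat -c "%%s %%y" %s`, tarball)); err != nil {
		fmt.Printf("copying %s -> %s\n", cached, tarball) // scp step elided in this sketch
	}
	// Extract with xattrs preserved so image layers keep their capabilities,
	// then clean up the tarball to free disk space.
	steps := []string{
		fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s", tarball),
		fmt.Sprintf("sudo rm -f %s", tarball),
	}
	for _, s := range steps {
		if err := runOnGuest(s); err != nil {
			fmt.Println("step failed:", s, err)
			return
		}
	}
	fmt.Println("preload extracted")
}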
	I1028 18:09:53.629908   53868 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:09:53.671241   53868 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1028 18:09:53.671266   53868 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:09:53.671325   53868 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:09:53.671365   53868 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 18:09:53.671384   53868 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 18:09:53.671412   53868 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 18:09:53.671420   53868 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:09:53.671360   53868 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 18:09:53.671487   53868 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 18:09:53.671513   53868 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:09:53.672966   53868 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 18:09:53.672983   53868 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:09:53.672989   53868 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 18:09:53.672973   53868 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 18:09:53.672997   53868 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 18:09:53.673018   53868 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:09:53.673094   53868 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 18:09:53.673326   53868 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:09:53.817307   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1028 18:09:53.819403   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 18:09:53.824710   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:09:53.866052   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 18:09:53.887258   53868 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1028 18:09:53.887299   53868 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 18:09:53.887353   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:53.896798   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1028 18:09:53.907080   53868 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1028 18:09:53.907113   53868 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:09:53.907154   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:53.920596   53868 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1028 18:09:53.920629   53868 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:09:53.920674   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:53.954901   53868 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1028 18:09:53.954935   53868 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1028 18:09:53.954975   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:53.955016   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1028 18:09:53.966384   53868 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1028 18:09:53.966416   53868 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 18:09:53.966422   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 18:09:53.966450   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:53.966454   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:09:53.966496   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 18:09:53.986706   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 18:09:53.986706   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1028 18:09:54.005994   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1028 18:09:54.081308   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 18:09:54.081446   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1028 18:09:54.081447   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:09:54.081521   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 18:09:54.141597   53868 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1028 18:09:54.141646   53868 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 18:09:54.141694   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:54.206916   53868 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1028 18:09:54.206965   53868 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 18:09:54.206973   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1028 18:09:54.207015   53868 ssh_runner.go:195] Run: which crictl
	I1028 18:09:54.236939   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 18:09:54.236979   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1028 18:09:54.237053   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:09:54.237111   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 18:09:54.237133   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 18:09:54.268056   53868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1028 18:09:54.268134   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1028 18:09:54.268165   53868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1028 18:09:54.323077   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1028 18:09:54.376321   53868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 18:09:54.376449   53868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 18:09:54.376505   53868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1028 18:09:54.376449   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 18:09:54.376585   53868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1028 18:09:54.376556   53868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1028 18:09:54.376662   53868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 18:09:54.393395   53868 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1028 18:09:54.393426   53868 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1028 18:09:54.393458   53868 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1028 18:09:54.393410   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1028 18:09:54.438155   53868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1028 18:09:54.438177   53868 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1028 18:09:54.438267   53868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1028 18:09:54.463248   53868 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1028 18:09:54.463279   53868 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1028 18:09:54.463342   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 18:09:55.895086   53868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:09:57.699326   53868 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.305847078s)
	I1028 18:09:57.699352   53868 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1028 18:09:57.699369   53868 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 18:09:57.699412   53868 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1028 18:09:57.699471   53868 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.305936017s)
	I1028 18:09:57.699493   53868 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.261202485s)
	I1028 18:09:57.699512   53868 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1028 18:09:57.699543   53868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1028 18:09:57.699572   53868 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (3.236204692s)
	I1028 18:09:57.699615   53868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1028 18:09:57.699615   53868 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.804503107s)
	I1028 18:09:57.699701   53868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1028 18:09:58.068462   53868 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 18:09:58.068529   53868 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1028 18:09:58.068582   53868 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1028 18:09:58.068583   53868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1028 18:09:58.068618   53868 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1028 18:09:58.068689   53868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1028 18:10:00.119137   53868 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.05042068s)
	I1028 18:10:00.119176   53868 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1028 18:10:00.119213   53868 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.050613705s)
	I1028 18:10:00.119227   53868 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1028 18:10:00.119249   53868 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 18:10:00.119291   53868 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1028 18:10:00.260674   53868 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1028 18:10:00.260717   53868 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1028 18:10:00.260759   53868 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1028 18:10:01.103281   53868 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1028 18:10:01.103328   53868 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1028 18:10:01.103376   53868 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1028 18:10:01.753943   53868 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1028 18:10:01.754004   53868 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1028 18:10:01.754058   53868 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1028 18:10:02.198194   53868 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1028 18:10:02.198243   53868 cache_images.go:123] Successfully loaded all cached images
	I1028 18:10:02.198248   53868 cache_images.go:92] duration metric: took 8.526963171s to LoadCachedImages
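The image-load phase above follows a simple pattern: stat the tarball already on the node, skip the transfer when it exists, then load it into the CRI-O image store with podman. The following is a minimal illustrative sketch of that last step only, shelling out to "sudo podman load" from Go; the path is a placeholder taken from the log, and this is not minikube's actual cache_images implementation.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage loads one cached image tarball into the node's image store,
// mirroring the "sudo podman load -i <tarball>" commands in the log above.
func loadCachedImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Placeholder path copied from the log; adjust for a real node.
	if err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.24.4"); err != nil {
		fmt.Println(err)
	}
}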
	I1028 18:10:02.198259   53868 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.24.4 crio true true} ...
	I1028 18:10:02.198347   53868 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-598338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-598338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:10:02.198416   53868 ssh_runner.go:195] Run: crio config
	I1028 18:10:02.241782   53868 cni.go:84] Creating CNI manager for ""
	I1028 18:10:02.241801   53868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:10:02.241821   53868 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:10:02.241838   53868 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-598338 NodeName:test-preload-598338 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:10:02.241967   53868 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-598338"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:10:02.242033   53868 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1028 18:10:02.251974   53868 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:10:02.252044   53868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:10:02.261380   53868 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1028 18:10:02.277526   53868 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:10:02.293178   53868 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
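The kubeadm.yaml written to the node above is rendered from the cluster parameters shown earlier (version, control-plane endpoint, pod and service CIDRs). As a rough sketch only, and not minikube's real bootstrapper template, the same kind of rendering can be done with Go's text/template; the field names and the tiny template below are illustrative assumptions.

package main

import (
	"os"
	"text/template"
)

// A deliberately small template covering only a few of the fields visible in
// the generated ClusterConfiguration above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.Version}}
controlPlaneEndpoint: {{.Endpoint}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	Version, Endpoint, PodCIDR, ServiceCIDR string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the config printed in the log above.
	_ = t.Execute(os.Stdout, params{
		Version:     "v1.24.4",
		Endpoint:    "control-plane.minikube.internal:8443",
		PodCIDR:     "10.244.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
	})
}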
	I1028 18:10:02.309635   53868 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I1028 18:10:02.313094   53868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
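The bash one-liner above removes any stale control-plane.minikube.internal entry from /etc/hosts and appends the current IP before copying the file back. A minimal Go sketch of the same filter-and-append step follows; it writes to a temp file rather than /etc/hosts (which needs root), and the IP and output path are placeholders.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const alias = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line except an existing mapping for the alias.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.Contains(line, alias) {
			kept = append(kept, line)
		}
	}
	// Append the fresh mapping, mirroring the echo in the shell command above.
	kept = append(kept, "192.168.39.7\t"+alias)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Print(out)
}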
	I1028 18:10:02.324617   53868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:10:02.429527   53868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:10:02.446323   53868 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338 for IP: 192.168.39.7
	I1028 18:10:02.446343   53868 certs.go:194] generating shared ca certs ...
	I1028 18:10:02.446361   53868 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:10:02.446503   53868 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:10:02.446541   53868 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:10:02.446551   53868 certs.go:256] generating profile certs ...
	I1028 18:10:02.446660   53868 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/client.key
	I1028 18:10:02.446730   53868 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/apiserver.key.eccf658b
	I1028 18:10:02.446769   53868 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/proxy-client.key
	I1028 18:10:02.446905   53868 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:10:02.446939   53868 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:10:02.446951   53868 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:10:02.446991   53868 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:10:02.447030   53868 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:10:02.447070   53868 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:10:02.447132   53868 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:10:02.447977   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:10:02.498944   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:10:02.536399   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:10:02.569403   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:10:02.600975   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 18:10:02.637959   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:10:02.669958   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:10:02.692370   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:10:02.714679   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:10:02.736725   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:10:02.758798   53868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:10:02.781261   53868 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:10:02.797349   53868 ssh_runner.go:195] Run: openssl version
	I1028 18:10:02.802946   53868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:10:02.813235   53868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:10:02.817574   53868 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:10:02.817621   53868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:10:02.823147   53868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:10:02.833313   53868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:10:02.843447   53868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:10:02.847659   53868 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:10:02.847696   53868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:10:02.852919   53868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:10:02.862860   53868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:10:02.872812   53868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:10:02.876961   53868 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:10:02.877002   53868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:10:02.882299   53868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
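The sequence above installs each CA certificate under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (the "<hash>.0" names), which is how OpenSSL-based clients locate a trusted CA. A small illustrative sketch of those two commands driven from Go is below; the certificate path in main is a placeholder, and the helper name is invented for the example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a PEM certificate and
// links it into /etc/ssl/certs as "<hash>.0", like the log's openssl/ln pair.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	// Placeholder path; in the log this is e.g. /usr/share/ca-certificates/minikubeCA.pem.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}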
	I1028 18:10:02.892234   53868 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:10:02.896636   53868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:10:02.902307   53868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:10:02.907589   53868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:10:02.913178   53868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:10:02.918387   53868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:10:02.923602   53868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
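Each "openssl x509 -checkend 86400" run above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The Go sketch below performs the equivalent check directly with crypto/x509; the file path is a placeholder and this is not minikube's own code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question "openssl x509 -checkend" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}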
	I1028 18:10:02.928931   53868 kubeadm.go:392] StartCluster: {Name:test-preload-598338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-598338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:10:02.929008   53868 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:10:02.929054   53868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:10:02.965514   53868 cri.go:89] found id: ""
	I1028 18:10:02.965579   53868 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:10:02.975398   53868 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:10:02.975421   53868 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:10:02.975478   53868 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:10:02.984651   53868 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:10:02.985074   53868 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-598338" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:10:02.985220   53868 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-598338" cluster setting kubeconfig missing "test-preload-598338" context setting]
	I1028 18:10:02.985515   53868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:10:02.986177   53868 kapi.go:59] client config for test-preload-598338: &rest.Config{Host:"https://192.168.39.7:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 18:10:02.986738   53868 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:10:02.995601   53868 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.7
	I1028 18:10:02.995626   53868 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:10:02.995637   53868 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:10:02.995687   53868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:10:03.029022   53868 cri.go:89] found id: ""
	I1028 18:10:03.029075   53868 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:10:03.044089   53868 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:10:03.053196   53868 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:10:03.053214   53868 kubeadm.go:157] found existing configuration files:
	
	I1028 18:10:03.053251   53868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:10:03.062291   53868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:10:03.062346   53868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:10:03.073517   53868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:10:03.092088   53868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:10:03.092138   53868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:10:03.101347   53868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:10:03.110082   53868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:10:03.110121   53868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:10:03.119086   53868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:10:03.127751   53868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:10:03.127786   53868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:10:03.136759   53868 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:10:03.145602   53868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:10:03.240137   53868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:10:03.852266   53868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:10:04.108584   53868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:10:04.191581   53868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:10:04.293610   53868 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:10:04.293722   53868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:10:04.793783   53868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:10:05.294606   53868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:10:05.311991   53868 api_server.go:72] duration metric: took 1.0183867s to wait for apiserver process to appear ...
	I1028 18:10:05.312018   53868 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:10:05.312054   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:05.312513   53868 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I1028 18:10:05.813098   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:05.813630   53868 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I1028 18:10:06.312284   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:08.873096   53868 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:10:08.873121   53868 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:10:08.873138   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:08.927657   53868 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:10:08.927683   53868 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:10:09.312124   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:09.319608   53868 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:10:09.319636   53868 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:10:09.812621   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:09.820266   53868 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:10:09.820299   53868 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:10:10.312831   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:10.321408   53868 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I1028 18:10:10.332335   53868 api_server.go:141] control plane version: v1.24.4
	I1028 18:10:10.332357   53868 api_server.go:131] duration metric: took 5.020334301s to wait for apiserver health ...
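The wait above polls the apiserver's /healthz endpoint until it returns 200, tolerating connection refusals while the process starts and the transient 403/500 responses seen while post-start hooks (rbac bootstrap roles, priority classes) finish. A minimal sketch of that polling loop is below; unlike the real code it skips TLS verification instead of authenticating with the cluster CA, and the URL and timeout are placeholders.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Simplification for the sketch: the real client trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.7:8443/healthz", time.Minute))
}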
	I1028 18:10:10.332366   53868 cni.go:84] Creating CNI manager for ""
	I1028 18:10:10.332373   53868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:10:10.333881   53868 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:10:10.335261   53868 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:10:10.346862   53868 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:10:10.393964   53868 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:10:10.394061   53868 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 18:10:10.394089   53868 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 18:10:10.407218   53868 system_pods.go:59] 7 kube-system pods found
	I1028 18:10:10.407258   53868 system_pods.go:61] "coredns-6d4b75cb6d-pc6xj" [e8deca6b-9622-4cf3-96fb-485676362d9f] Running
	I1028 18:10:10.407266   53868 system_pods.go:61] "etcd-test-preload-598338" [2edea434-f735-4907-8984-30a59b991ec0] Running
	I1028 18:10:10.407271   53868 system_pods.go:61] "kube-apiserver-test-preload-598338" [c6efd4ce-587d-479c-a968-24f07734d5ee] Running
	I1028 18:10:10.407281   53868 system_pods.go:61] "kube-controller-manager-test-preload-598338" [5ebe4a6e-6140-47b6-9d86-bb412a7ef767] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:10:10.407292   53868 system_pods.go:61] "kube-proxy-pdzkg" [479a7625-bb01-4e38-ba78-6f60b799a428] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:10:10.407301   53868 system_pods.go:61] "kube-scheduler-test-preload-598338" [44073667-ade8-4bc7-a5c4-06e6ac31ee29] Running
	I1028 18:10:10.407308   53868 system_pods.go:61] "storage-provisioner" [501253f3-8ca1-4be7-9172-112d2d792fd2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:10:10.407318   53868 system_pods.go:74] duration metric: took 13.331054ms to wait for pod list to return data ...
	I1028 18:10:10.407330   53868 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:10:10.410735   53868 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:10:10.410767   53868 node_conditions.go:123] node cpu capacity is 2
	I1028 18:10:10.410781   53868 node_conditions.go:105] duration metric: took 3.442288ms to run NodePressure ...
	I1028 18:10:10.410800   53868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:10:10.643462   53868 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:10:10.647234   53868 kubeadm.go:739] kubelet initialised
	I1028 18:10:10.647253   53868 kubeadm.go:740] duration metric: took 3.760597ms waiting for restarted kubelet to initialise ...
	I1028 18:10:10.647262   53868 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:10:10.652653   53868 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-pc6xj" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:10.660113   53868 pod_ready.go:98] node "test-preload-598338" hosting pod "coredns-6d4b75cb6d-pc6xj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.660137   53868 pod_ready.go:82] duration metric: took 7.460792ms for pod "coredns-6d4b75cb6d-pc6xj" in "kube-system" namespace to be "Ready" ...
	E1028 18:10:10.660148   53868 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-598338" hosting pod "coredns-6d4b75cb6d-pc6xj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.660158   53868 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:10.663762   53868 pod_ready.go:98] node "test-preload-598338" hosting pod "etcd-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.663782   53868 pod_ready.go:82] duration metric: took 3.614442ms for pod "etcd-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	E1028 18:10:10.663789   53868 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-598338" hosting pod "etcd-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.663796   53868 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:10.667509   53868 pod_ready.go:98] node "test-preload-598338" hosting pod "kube-apiserver-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.667527   53868 pod_ready.go:82] duration metric: took 3.722101ms for pod "kube-apiserver-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	E1028 18:10:10.667534   53868 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-598338" hosting pod "kube-apiserver-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.667541   53868 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:10.797891   53868 pod_ready.go:98] node "test-preload-598338" hosting pod "kube-controller-manager-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.797915   53868 pod_ready.go:82] duration metric: took 130.367784ms for pod "kube-controller-manager-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	E1028 18:10:10.797924   53868 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-598338" hosting pod "kube-controller-manager-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:10.797931   53868 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pdzkg" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:11.198251   53868 pod_ready.go:98] node "test-preload-598338" hosting pod "kube-proxy-pdzkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:11.198277   53868 pod_ready.go:82] duration metric: took 400.338142ms for pod "kube-proxy-pdzkg" in "kube-system" namespace to be "Ready" ...
	E1028 18:10:11.198285   53868 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-598338" hosting pod "kube-proxy-pdzkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:11.198292   53868 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:11.597798   53868 pod_ready.go:98] node "test-preload-598338" hosting pod "kube-scheduler-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:11.597830   53868 pod_ready.go:82] duration metric: took 399.530205ms for pod "kube-scheduler-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	E1028 18:10:11.597842   53868 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-598338" hosting pod "kube-scheduler-test-preload-598338" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:11.597852   53868 pod_ready.go:39] duration metric: took 950.579593ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
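The extra wait above checks each system-critical pod for a "Ready" condition and, because the node itself is not yet Ready, skips the per-pod wait with the messages shown. As a sketch only, using plain client-go rather than minikube's wrappers, the readiness check amounts to reading the pod's PodReady condition; the kubeconfig path and pod name below are placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True.
func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "etcd-test-preload-598338")
	fmt.Println(ready, err)
}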
	I1028 18:10:11.597886   53868 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:10:11.609628   53868 ops.go:34] apiserver oom_adj: -16
	I1028 18:10:11.609645   53868 kubeadm.go:597] duration metric: took 8.634218192s to restartPrimaryControlPlane
	I1028 18:10:11.609654   53868 kubeadm.go:394] duration metric: took 8.680726645s to StartCluster
	I1028 18:10:11.609682   53868 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:10:11.609769   53868 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:10:11.610467   53868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:10:11.610682   53868 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:10:11.610733   53868 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:10:11.610828   53868 addons.go:69] Setting storage-provisioner=true in profile "test-preload-598338"
	I1028 18:10:11.610843   53868 addons.go:69] Setting default-storageclass=true in profile "test-preload-598338"
	I1028 18:10:11.610848   53868 addons.go:234] Setting addon storage-provisioner=true in "test-preload-598338"
	W1028 18:10:11.610859   53868 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:10:11.610862   53868 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-598338"
	I1028 18:10:11.610890   53868 host.go:66] Checking if "test-preload-598338" exists ...
	I1028 18:10:11.610923   53868 config.go:182] Loaded profile config "test-preload-598338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1028 18:10:11.611302   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:10:11.611302   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:10:11.611357   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:10:11.611440   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:10:11.612260   53868 out.go:177] * Verifying Kubernetes components...
	I1028 18:10:11.613524   53868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:10:11.625935   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1028 18:10:11.626442   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:10:11.626915   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:10:11.626936   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:10:11.627303   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:10:11.627799   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:10:11.627850   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:10:11.629438   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I1028 18:10:11.629757   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:10:11.630194   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:10:11.630210   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:10:11.630570   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:10:11.630774   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetState
	I1028 18:10:11.632824   53868 kapi.go:59] client config for test-preload-598338: &rest.Config{Host:"https://192.168.39.7:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/test-preload-598338/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 18:10:11.633102   53868 addons.go:234] Setting addon default-storageclass=true in "test-preload-598338"
	W1028 18:10:11.633117   53868 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:10:11.633145   53868 host.go:66] Checking if "test-preload-598338" exists ...
	I1028 18:10:11.633414   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:10:11.633450   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:10:11.646127   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37891
	I1028 18:10:11.646620   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:10:11.646731   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I1028 18:10:11.647146   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:10:11.647167   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:10:11.647193   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:10:11.647480   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:10:11.647623   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:10:11.647649   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:10:11.647628   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetState
	I1028 18:10:11.647960   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:10:11.648523   53868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 18:10:11.648564   53868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:10:11.649377   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:10:11.651235   53868 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:10:11.652495   53868 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:10:11.652514   53868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:10:11.652531   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:10:11.655407   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:10:11.655753   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:10:11.655775   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:10:11.655924   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:10:11.656053   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:10:11.656138   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:10:11.656211   53868 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa Username:docker}
	I1028 18:10:11.691363   53868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I1028 18:10:11.691735   53868 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:10:11.692173   53868 main.go:141] libmachine: Using API Version  1
	I1028 18:10:11.692198   53868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:10:11.692523   53868 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:10:11.692665   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetState
	I1028 18:10:11.693968   53868 main.go:141] libmachine: (test-preload-598338) Calling .DriverName
	I1028 18:10:11.694186   53868 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:10:11.694201   53868 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:10:11.694216   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHHostname
	I1028 18:10:11.696730   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:10:11.697077   53868 main.go:141] libmachine: (test-preload-598338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:f5:eb", ip: ""} in network mk-test-preload-598338: {Iface:virbr1 ExpiryTime:2024-10-28 19:09:40 +0000 UTC Type:0 Mac:52:54:00:99:f5:eb Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:test-preload-598338 Clientid:01:52:54:00:99:f5:eb}
	I1028 18:10:11.697103   53868 main.go:141] libmachine: (test-preload-598338) DBG | domain test-preload-598338 has defined IP address 192.168.39.7 and MAC address 52:54:00:99:f5:eb in network mk-test-preload-598338
	I1028 18:10:11.697302   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHPort
	I1028 18:10:11.697469   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHKeyPath
	I1028 18:10:11.697642   53868 main.go:141] libmachine: (test-preload-598338) Calling .GetSSHUsername
	I1028 18:10:11.697783   53868 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/test-preload-598338/id_rsa Username:docker}
	I1028 18:10:11.772381   53868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:10:11.793460   53868 node_ready.go:35] waiting up to 6m0s for node "test-preload-598338" to be "Ready" ...
	I1028 18:10:11.853947   53868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:10:11.910853   53868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:10:12.729940   53868 main.go:141] libmachine: Making call to close driver server
	I1028 18:10:12.729992   53868 main.go:141] libmachine: (test-preload-598338) Calling .Close
	I1028 18:10:12.730309   53868 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:10:12.730328   53868 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:10:12.730369   53868 main.go:141] libmachine: (test-preload-598338) DBG | Closing plugin on server side
	I1028 18:10:12.730446   53868 main.go:141] libmachine: Making call to close driver server
	I1028 18:10:12.730467   53868 main.go:141] libmachine: (test-preload-598338) Calling .Close
	I1028 18:10:12.730692   53868 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:10:12.730707   53868 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:10:12.730714   53868 main.go:141] libmachine: (test-preload-598338) DBG | Closing plugin on server side
	I1028 18:10:12.736411   53868 main.go:141] libmachine: Making call to close driver server
	I1028 18:10:12.736428   53868 main.go:141] libmachine: (test-preload-598338) Calling .Close
	I1028 18:10:12.736637   53868 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:10:12.736653   53868 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:10:12.736670   53868 main.go:141] libmachine: (test-preload-598338) DBG | Closing plugin on server side
	I1028 18:10:12.761069   53868 main.go:141] libmachine: Making call to close driver server
	I1028 18:10:12.761086   53868 main.go:141] libmachine: (test-preload-598338) Calling .Close
	I1028 18:10:12.761310   53868 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:10:12.761324   53868 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:10:12.761342   53868 main.go:141] libmachine: (test-preload-598338) DBG | Closing plugin on server side
	I1028 18:10:12.761376   53868 main.go:141] libmachine: Making call to close driver server
	I1028 18:10:12.761395   53868 main.go:141] libmachine: (test-preload-598338) Calling .Close
	I1028 18:10:12.761594   53868 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:10:12.761606   53868 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:10:12.763492   53868 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 18:10:12.764705   53868 addons.go:510] duration metric: took 1.15397947s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 18:10:13.796609   53868 node_ready.go:53] node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:15.796834   53868 node_ready.go:53] node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:17.799404   53868 node_ready.go:53] node "test-preload-598338" has status "Ready":"False"
	I1028 18:10:19.796697   53868 node_ready.go:49] node "test-preload-598338" has status "Ready":"True"
	I1028 18:10:19.796725   53868 node_ready.go:38] duration metric: took 8.003230991s for node "test-preload-598338" to be "Ready" ...
	I1028 18:10:19.796734   53868 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:10:19.801456   53868 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-pc6xj" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.809849   53868 pod_ready.go:93] pod "coredns-6d4b75cb6d-pc6xj" in "kube-system" namespace has status "Ready":"True"
	I1028 18:10:19.809872   53868 pod_ready.go:82] duration metric: took 8.39002ms for pod "coredns-6d4b75cb6d-pc6xj" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.809881   53868 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.818392   53868 pod_ready.go:93] pod "etcd-test-preload-598338" in "kube-system" namespace has status "Ready":"True"
	I1028 18:10:19.818413   53868 pod_ready.go:82] duration metric: took 8.526751ms for pod "etcd-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.818423   53868 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.823880   53868 pod_ready.go:93] pod "kube-apiserver-test-preload-598338" in "kube-system" namespace has status "Ready":"True"
	I1028 18:10:19.823903   53868 pod_ready.go:82] duration metric: took 5.474076ms for pod "kube-apiserver-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.823912   53868 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.828354   53868 pod_ready.go:93] pod "kube-controller-manager-test-preload-598338" in "kube-system" namespace has status "Ready":"True"
	I1028 18:10:19.828373   53868 pod_ready.go:82] duration metric: took 4.455518ms for pod "kube-controller-manager-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:19.828381   53868 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pdzkg" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:20.196787   53868 pod_ready.go:93] pod "kube-proxy-pdzkg" in "kube-system" namespace has status "Ready":"True"
	I1028 18:10:20.196811   53868 pod_ready.go:82] duration metric: took 368.423295ms for pod "kube-proxy-pdzkg" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:20.196824   53868 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:20.596753   53868 pod_ready.go:93] pod "kube-scheduler-test-preload-598338" in "kube-system" namespace has status "Ready":"True"
	I1028 18:10:20.596775   53868 pod_ready.go:82] duration metric: took 399.943929ms for pod "kube-scheduler-test-preload-598338" in "kube-system" namespace to be "Ready" ...
	I1028 18:10:20.596785   53868 pod_ready.go:39] duration metric: took 800.038401ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:10:20.596801   53868 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:10:20.596880   53868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:10:20.611773   53868 api_server.go:72] duration metric: took 9.001062599s to wait for apiserver process to appear ...
	I1028 18:10:20.611803   53868 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:10:20.611822   53868 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1028 18:10:20.617055   53868 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I1028 18:10:20.617817   53868 api_server.go:141] control plane version: v1.24.4
	I1028 18:10:20.617832   53868 api_server.go:131] duration metric: took 6.023203ms to wait for apiserver health ...
	I1028 18:10:20.617839   53868 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:10:20.799663   53868 system_pods.go:59] 7 kube-system pods found
	I1028 18:10:20.799692   53868 system_pods.go:61] "coredns-6d4b75cb6d-pc6xj" [e8deca6b-9622-4cf3-96fb-485676362d9f] Running
	I1028 18:10:20.799697   53868 system_pods.go:61] "etcd-test-preload-598338" [2edea434-f735-4907-8984-30a59b991ec0] Running
	I1028 18:10:20.799700   53868 system_pods.go:61] "kube-apiserver-test-preload-598338" [c6efd4ce-587d-479c-a968-24f07734d5ee] Running
	I1028 18:10:20.799705   53868 system_pods.go:61] "kube-controller-manager-test-preload-598338" [5ebe4a6e-6140-47b6-9d86-bb412a7ef767] Running
	I1028 18:10:20.799707   53868 system_pods.go:61] "kube-proxy-pdzkg" [479a7625-bb01-4e38-ba78-6f60b799a428] Running
	I1028 18:10:20.799711   53868 system_pods.go:61] "kube-scheduler-test-preload-598338" [44073667-ade8-4bc7-a5c4-06e6ac31ee29] Running
	I1028 18:10:20.799719   53868 system_pods.go:61] "storage-provisioner" [501253f3-8ca1-4be7-9172-112d2d792fd2] Running
	I1028 18:10:20.799724   53868 system_pods.go:74] duration metric: took 181.881119ms to wait for pod list to return data ...
	I1028 18:10:20.799730   53868 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:10:20.997495   53868 default_sa.go:45] found service account: "default"
	I1028 18:10:20.997520   53868 default_sa.go:55] duration metric: took 197.784667ms for default service account to be created ...
	I1028 18:10:20.997528   53868 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:10:21.200202   53868 system_pods.go:86] 7 kube-system pods found
	I1028 18:10:21.200235   53868 system_pods.go:89] "coredns-6d4b75cb6d-pc6xj" [e8deca6b-9622-4cf3-96fb-485676362d9f] Running
	I1028 18:10:21.200243   53868 system_pods.go:89] "etcd-test-preload-598338" [2edea434-f735-4907-8984-30a59b991ec0] Running
	I1028 18:10:21.200252   53868 system_pods.go:89] "kube-apiserver-test-preload-598338" [c6efd4ce-587d-479c-a968-24f07734d5ee] Running
	I1028 18:10:21.200258   53868 system_pods.go:89] "kube-controller-manager-test-preload-598338" [5ebe4a6e-6140-47b6-9d86-bb412a7ef767] Running
	I1028 18:10:21.200263   53868 system_pods.go:89] "kube-proxy-pdzkg" [479a7625-bb01-4e38-ba78-6f60b799a428] Running
	I1028 18:10:21.200268   53868 system_pods.go:89] "kube-scheduler-test-preload-598338" [44073667-ade8-4bc7-a5c4-06e6ac31ee29] Running
	I1028 18:10:21.200273   53868 system_pods.go:89] "storage-provisioner" [501253f3-8ca1-4be7-9172-112d2d792fd2] Running
	I1028 18:10:21.200281   53868 system_pods.go:126] duration metric: took 202.747489ms to wait for k8s-apps to be running ...
	I1028 18:10:21.200290   53868 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:10:21.200357   53868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:10:21.215772   53868 system_svc.go:56] duration metric: took 15.475941ms WaitForService to wait for kubelet
	I1028 18:10:21.215798   53868 kubeadm.go:582] duration metric: took 9.605091072s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:10:21.215836   53868 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:10:21.396611   53868 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:10:21.396634   53868 node_conditions.go:123] node cpu capacity is 2
	I1028 18:10:21.396643   53868 node_conditions.go:105] duration metric: took 180.801451ms to run NodePressure ...
	I1028 18:10:21.396653   53868 start.go:241] waiting for startup goroutines ...
	I1028 18:10:21.396659   53868 start.go:246] waiting for cluster config update ...
	I1028 18:10:21.396668   53868 start.go:255] writing updated cluster config ...
	I1028 18:10:21.396897   53868 ssh_runner.go:195] Run: rm -f paused
	I1028 18:10:21.453873   53868 start.go:600] kubectl: 1.31.2, cluster: 1.24.4 (minor skew: 7)
	I1028 18:10:21.455710   53868 out.go:201] 
	W1028 18:10:21.457177   53868 out.go:270] ! /usr/local/bin/kubectl is version 1.31.2, which may have incompatibilities with Kubernetes 1.24.4.
	I1028 18:10:21.458488   53868 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1028 18:10:21.459862   53868 out.go:177] * Done! kubectl is now configured to use "test-preload-598338" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.379840608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e6d14dd-184c-450e-a7b5-068ed773ecb3 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.383303309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5e560a8-540f-4b9d-9716-1e0209d621e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.383770128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139022383751189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5e560a8-540f-4b9d-9716-1e0209d621e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.384420371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=545539ec-ddc1-457f-bf6c-517073f178bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.384486769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=545539ec-ddc1-457f-bf6c-517073f178bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.384659524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce199f835f71ab3461494b952cf75e390d4f5bb78a65c3eef364e212f80abaa,PodSandboxId:6017c9cdd9f6008d35a33d18baf859dd2c6cd4652516b53f0e47c8a065ad9b92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730139017351103981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pc6xj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deca6b-9622-4cf3-96fb-485676362d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 1962e618,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4b411afeaafc642d8bc7f5708a7a13644b0798eb595c3163a988bd99e239f,PodSandboxId:a7c87f8503714db5715ffada4c8b657de0d2c55957275545970b43c1a4cb770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730139010216470449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdzkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 479a7625-bb01-4e38-ba78-6f60b799a428,},Annotations:map[string]string{io.kubernetes.container.hash: 5c37e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:754bba2b95617d38e2ee7e39c6fdb4865861619c2f12e5f7d86fc8ba07706e5b,PodSandboxId:af07a6f70087539e1051c231909b712216c33dd617f6aaaeacfd6c1ee74de0e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730139009915627772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
1253f3-8ca1-4be7-9172-112d2d792fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 49de7fd8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e0926f7da7494488b11ee409aae4b9a64a5e956788a2c995bda4bcd60ccd15,PodSandboxId:2c06d33fec5f3bbc175974232532f1023f4fa602b04f80ec9d98214872bb56cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730139005025899824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64af411a2
c7ad3bb8a04dab1cfaba95c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179bb8030aebc8a3ffacff685797c2f95360c53126a7236ff1b5bba42e1b91f2,PodSandboxId:e9d805d16236552f8ebf4c84401e8f4b1f4df5df1444786cd25e78b477292e0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730139005000747364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc6390859368db15d43603cf43c918a1,},Annotations:map
[string]string{io.kubernetes.container.hash: 416540ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27bd4386eb7faffcaf7eb701474958909aaf955f3e6c424c3d0cf8cf165e47ea,PodSandboxId:2911478a45a448a6a9435132451792d8e63512350fc34a7ee2964fb9e0dc662e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730139004992211274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293abe9ce13b6cc57eb558d62c92611b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1daa58fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a19faec9786ac64ba7a21590716d0096226ed3c52d22489f0d27ca71f5496e0,PodSandboxId:4c4f215387dbe2bd2d8d15046bbc7061501765aa9c0fc37b058f8b4ebc46493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730139004904303021,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b122738e92e925b4d3c20c6edc4d5ce,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=545539ec-ddc1-457f-bf6c-517073f178bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.402212043Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a90f863e-b0e5-4259-9b86-24681391fca5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.402391363Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6017c9cdd9f6008d35a33d18baf859dd2c6cd4652516b53f0e47c8a065ad9b92,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-pc6xj,Uid:e8deca6b-9622-4cf3-96fb-485676362d9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730139017133289735,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-pc6xj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deca6b-9622-4cf3-96fb-485676362d9f,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:10:09.206752688Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7c87f8503714db5715ffada4c8b657de0d2c55957275545970b43c1a4cb770e,Metadata:&PodSandboxMetadata{Name:kube-proxy-pdzkg,Uid:479a7625-bb01-4e38-ba78-6f60b799a428,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1730139010115690099,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pdzkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479a7625-bb01-4e38-ba78-6f60b799a428,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:10:09.206749743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af07a6f70087539e1051c231909b712216c33dd617f6aaaeacfd6c1ee74de0e9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:501253f3-8ca1-4be7-9172-112d2d792fd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730139009815090612,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 501253f3-8ca1-4be7-9172-112d
2d792fd2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T18:10:09.206751752Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e9d805d16236552f8ebf4c84401e8f4b1f4df5df1444786cd25e78b477292e0c,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-598338,Uid:cc6390859368db15d
43603cf43c918a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730139004776792949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc6390859368db15d43603cf43c918a1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.7:2379,kubernetes.io/config.hash: cc6390859368db15d43603cf43c918a1,kubernetes.io/config.seen: 2024-10-28T18:10:04.277385042Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c4f215387dbe2bd2d8d15046bbc7061501765aa9c0fc37b058f8b4ebc46493e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-598338,Uid:2b122738e92e925b4d3c20c6edc4d5ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730139004767611332,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-c
ontroller-manager-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b122738e92e925b4d3c20c6edc4d5ce,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2b122738e92e925b4d3c20c6edc4d5ce,kubernetes.io/config.seen: 2024-10-28T18:10:04.202463723Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c06d33fec5f3bbc175974232532f1023f4fa602b04f80ec9d98214872bb56cf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-598338,Uid:64af411a2c7ad3bb8a04dab1cfaba95c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730139004765752344,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64af411a2c7ad3bb8a04dab1cfaba95c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 64af411a2c7ad3bb8a04dab1cfaba95c,kubernetes.io/config.seen: 2024-10-28T18:1
0:04.202469526Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2911478a45a448a6a9435132451792d8e63512350fc34a7ee2964fb9e0dc662e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-598338,Uid:293abe9ce13b6cc57eb558d62c92611b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730139004753647445,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293abe9ce13b6cc57eb558d62c92611b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.7:8443,kubernetes.io/config.hash: 293abe9ce13b6cc57eb558d62c92611b,kubernetes.io/config.seen: 2024-10-28T18:10:04.202436604Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a90f863e-b0e5-4259-9b86-24681391fca5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.402845671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1377c9a4-4225-4f08-9748-3d0823bd96a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.402907295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1377c9a4-4225-4f08-9748-3d0823bd96a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.403128362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce199f835f71ab3461494b952cf75e390d4f5bb78a65c3eef364e212f80abaa,PodSandboxId:6017c9cdd9f6008d35a33d18baf859dd2c6cd4652516b53f0e47c8a065ad9b92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730139017351103981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pc6xj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deca6b-9622-4cf3-96fb-485676362d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 1962e618,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4b411afeaafc642d8bc7f5708a7a13644b0798eb595c3163a988bd99e239f,PodSandboxId:a7c87f8503714db5715ffada4c8b657de0d2c55957275545970b43c1a4cb770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730139010216470449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdzkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 479a7625-bb01-4e38-ba78-6f60b799a428,},Annotations:map[string]string{io.kubernetes.container.hash: 5c37e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:754bba2b95617d38e2ee7e39c6fdb4865861619c2f12e5f7d86fc8ba07706e5b,PodSandboxId:af07a6f70087539e1051c231909b712216c33dd617f6aaaeacfd6c1ee74de0e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730139009915627772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
1253f3-8ca1-4be7-9172-112d2d792fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 49de7fd8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e0926f7da7494488b11ee409aae4b9a64a5e956788a2c995bda4bcd60ccd15,PodSandboxId:2c06d33fec5f3bbc175974232532f1023f4fa602b04f80ec9d98214872bb56cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730139005025899824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64af411a2
c7ad3bb8a04dab1cfaba95c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179bb8030aebc8a3ffacff685797c2f95360c53126a7236ff1b5bba42e1b91f2,PodSandboxId:e9d805d16236552f8ebf4c84401e8f4b1f4df5df1444786cd25e78b477292e0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730139005000747364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc6390859368db15d43603cf43c918a1,},Annotations:map
[string]string{io.kubernetes.container.hash: 416540ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27bd4386eb7faffcaf7eb701474958909aaf955f3e6c424c3d0cf8cf165e47ea,PodSandboxId:2911478a45a448a6a9435132451792d8e63512350fc34a7ee2964fb9e0dc662e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730139004992211274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293abe9ce13b6cc57eb558d62c92611b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1daa58fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a19faec9786ac64ba7a21590716d0096226ed3c52d22489f0d27ca71f5496e0,PodSandboxId:4c4f215387dbe2bd2d8d15046bbc7061501765aa9c0fc37b058f8b4ebc46493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730139004904303021,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b122738e92e925b4d3c20c6edc4d5ce,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1377c9a4-4225-4f08-9748-3d0823bd96a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.421730402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f45f1d6-8fa8-4f2d-954b-e90f8d1efbe1 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.421815147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f45f1d6-8fa8-4f2d-954b-e90f8d1efbe1 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.422632018Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48ee4453-5018-48d3-8e64-352c92f94afa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.423124144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139022423106411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48ee4453-5018-48d3-8e64-352c92f94afa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.423512981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdd33be4-5cbe-4c36-a579-b8bc63a10ea5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.423579493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdd33be4-5cbe-4c36-a579-b8bc63a10ea5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.423756836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce199f835f71ab3461494b952cf75e390d4f5bb78a65c3eef364e212f80abaa,PodSandboxId:6017c9cdd9f6008d35a33d18baf859dd2c6cd4652516b53f0e47c8a065ad9b92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730139017351103981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pc6xj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deca6b-9622-4cf3-96fb-485676362d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 1962e618,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4b411afeaafc642d8bc7f5708a7a13644b0798eb595c3163a988bd99e239f,PodSandboxId:a7c87f8503714db5715ffada4c8b657de0d2c55957275545970b43c1a4cb770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730139010216470449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdzkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 479a7625-bb01-4e38-ba78-6f60b799a428,},Annotations:map[string]string{io.kubernetes.container.hash: 5c37e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:754bba2b95617d38e2ee7e39c6fdb4865861619c2f12e5f7d86fc8ba07706e5b,PodSandboxId:af07a6f70087539e1051c231909b712216c33dd617f6aaaeacfd6c1ee74de0e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730139009915627772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
1253f3-8ca1-4be7-9172-112d2d792fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 49de7fd8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e0926f7da7494488b11ee409aae4b9a64a5e956788a2c995bda4bcd60ccd15,PodSandboxId:2c06d33fec5f3bbc175974232532f1023f4fa602b04f80ec9d98214872bb56cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730139005025899824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64af411a2
c7ad3bb8a04dab1cfaba95c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179bb8030aebc8a3ffacff685797c2f95360c53126a7236ff1b5bba42e1b91f2,PodSandboxId:e9d805d16236552f8ebf4c84401e8f4b1f4df5df1444786cd25e78b477292e0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730139005000747364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc6390859368db15d43603cf43c918a1,},Annotations:map
[string]string{io.kubernetes.container.hash: 416540ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27bd4386eb7faffcaf7eb701474958909aaf955f3e6c424c3d0cf8cf165e47ea,PodSandboxId:2911478a45a448a6a9435132451792d8e63512350fc34a7ee2964fb9e0dc662e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730139004992211274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293abe9ce13b6cc57eb558d62c92611b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1daa58fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a19faec9786ac64ba7a21590716d0096226ed3c52d22489f0d27ca71f5496e0,PodSandboxId:4c4f215387dbe2bd2d8d15046bbc7061501765aa9c0fc37b058f8b4ebc46493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730139004904303021,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b122738e92e925b4d3c20c6edc4d5ce,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdd33be4-5cbe-4c36-a579-b8bc63a10ea5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.454885456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b1281e7-2e7d-4af3-8f4b-82fe23980113 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.455021497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b1281e7-2e7d-4af3-8f4b-82fe23980113 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.456419964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6eb51c0-6195-4769-8730-e4619d574287 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.456849847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139022456830756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6eb51c0-6195-4769-8730-e4619d574287 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.457382013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cfc3666-4cff-4a72-97f7-ac8f6575b572 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.457447229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cfc3666-4cff-4a72-97f7-ac8f6575b572 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:10:22 test-preload-598338 crio[656]: time="2024-10-28 18:10:22.457601546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce199f835f71ab3461494b952cf75e390d4f5bb78a65c3eef364e212f80abaa,PodSandboxId:6017c9cdd9f6008d35a33d18baf859dd2c6cd4652516b53f0e47c8a065ad9b92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730139017351103981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pc6xj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deca6b-9622-4cf3-96fb-485676362d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 1962e618,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4b411afeaafc642d8bc7f5708a7a13644b0798eb595c3163a988bd99e239f,PodSandboxId:a7c87f8503714db5715ffada4c8b657de0d2c55957275545970b43c1a4cb770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730139010216470449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdzkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 479a7625-bb01-4e38-ba78-6f60b799a428,},Annotations:map[string]string{io.kubernetes.container.hash: 5c37e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:754bba2b95617d38e2ee7e39c6fdb4865861619c2f12e5f7d86fc8ba07706e5b,PodSandboxId:af07a6f70087539e1051c231909b712216c33dd617f6aaaeacfd6c1ee74de0e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730139009915627772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
1253f3-8ca1-4be7-9172-112d2d792fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 49de7fd8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e0926f7da7494488b11ee409aae4b9a64a5e956788a2c995bda4bcd60ccd15,PodSandboxId:2c06d33fec5f3bbc175974232532f1023f4fa602b04f80ec9d98214872bb56cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730139005025899824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64af411a2
c7ad3bb8a04dab1cfaba95c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179bb8030aebc8a3ffacff685797c2f95360c53126a7236ff1b5bba42e1b91f2,PodSandboxId:e9d805d16236552f8ebf4c84401e8f4b1f4df5df1444786cd25e78b477292e0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730139005000747364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc6390859368db15d43603cf43c918a1,},Annotations:map
[string]string{io.kubernetes.container.hash: 416540ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27bd4386eb7faffcaf7eb701474958909aaf955f3e6c424c3d0cf8cf165e47ea,PodSandboxId:2911478a45a448a6a9435132451792d8e63512350fc34a7ee2964fb9e0dc662e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730139004992211274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293abe9ce13b6cc57eb558d62c92611b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1daa58fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a19faec9786ac64ba7a21590716d0096226ed3c52d22489f0d27ca71f5496e0,PodSandboxId:4c4f215387dbe2bd2d8d15046bbc7061501765aa9c0fc37b058f8b4ebc46493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730139004904303021,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-598338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b122738e92e925b4d3c20c6edc4d5ce,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cfc3666-4cff-4a72-97f7-ac8f6575b572 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ce199f835f71       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   6017c9cdd9f60       coredns-6d4b75cb6d-pc6xj
	dae4b411afeaa       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   a7c87f8503714       kube-proxy-pdzkg
	754bba2b95617       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       2                   af07a6f700875       storage-provisioner
	78e0926f7da74       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   2c06d33fec5f3       kube-scheduler-test-preload-598338
	179bb8030aebc       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   e9d805d162365       etcd-test-preload-598338
	27bd4386eb7fa       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   2911478a45a44       kube-apiserver-test-preload-598338
	8a19faec9786a       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   4c4f215387dbe       kube-controller-manager-test-preload-598338
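	The listing above is CRI-level container state; every control-plane container is on attempt 1 or higher because TestPreload restarts the cluster. As a minimal sketch (assuming the crictl binary shipped on the minikube node), the same view can be pulled directly from the guest:
	
	  out/minikube-linux-amd64 ssh -p test-preload-598338 -- sudo crictl ps -a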
	
	
	==> coredns [7ce199f835f71ab3461494b952cf75e390d4f5bb78a65c3eef364e212f80abaa] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:33682 - 17627 "HINFO IN 5796100925422902958.9031571591512027752. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011396315s
	
	
	==> describe nodes <==
	Name:               test-preload-598338
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-598338
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=test-preload-598338
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_07_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:07:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-598338
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:10:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:10:19 +0000   Mon, 28 Oct 2024 18:07:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:10:19 +0000   Mon, 28 Oct 2024 18:07:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:10:19 +0000   Mon, 28 Oct 2024 18:07:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:10:19 +0000   Mon, 28 Oct 2024 18:10:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    test-preload-598338
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 344dbad9802e42aaa957e08bcf4f8b1b
	  System UUID:                344dbad9-802e-42aa-a957-e08bcf4f8b1b
	  Boot ID:                    bc482763-a14c-4032-aca8-310b027faea1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pc6xj                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m13s
	  kube-system                 etcd-test-preload-598338                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m26s
	  kube-system                 kube-apiserver-test-preload-598338             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-test-preload-598338    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-pdzkg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-test-preload-598338             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12s                    kube-proxy       
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m34s (x5 over 2m34s)  kubelet          Node test-preload-598338 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s (x4 over 2m34s)  kubelet          Node test-preload-598338 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s (x4 over 2m34s)  kubelet          Node test-preload-598338 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m27s                  kubelet          Node test-preload-598338 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s                  kubelet          Node test-preload-598338 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s                  kubelet          Node test-preload-598338 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m16s                  kubelet          Node test-preload-598338 status is now: NodeReady
	  Normal  RegisteredNode           2m14s                  node-controller  Node test-preload-598338 event: Registered Node test-preload-598338 in Controller
	  Normal  Starting                 18s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)      kubelet          Node test-preload-598338 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)      kubelet          Node test-preload-598338 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)      kubelet          Node test-preload-598338 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                     node-controller  Node test-preload-598338 event: Registered Node test-preload-598338 in Controller
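	The node description above is the standard kubectl view that the post-mortem captures; it can be regenerated against the same profile (the kubectl context name appears in the commands later in this log):
	
	  kubectl --context test-preload-598338 describe node test-preload-598338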
	
	
	==> dmesg <==
	[Oct28 18:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050161] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039145] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.855959] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.504701] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.565035] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.129762] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.054467] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.046367] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.191195] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.136912] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.272685] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[Oct28 18:10] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.056938] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.601629] systemd-fstab-generator[1109]: Ignoring "noauto" option for root device
	[  +5.857083] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.783545] systemd-fstab-generator[1733]: Ignoring "noauto" option for root device
	[  +5.499554] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [179bb8030aebc8a3ffacff685797c2f95360c53126a7236ff1b5bba42e1b91f2] <==
	{"level":"info","ts":"2024-10-28T18:10:05.285Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"bb39151d8411994b","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-28T18:10:05.285Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-28T18:10:05.285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b switched to configuration voters=(13490837375279012171)"}
	{"level":"info","ts":"2024-10-28T18:10:05.285Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3202df3d6e5aadcb","local-member-id":"bb39151d8411994b","added-peer-id":"bb39151d8411994b","added-peer-peer-urls":["https://192.168.39.7:2380"]}
	{"level":"info","ts":"2024-10-28T18:10:05.285Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3202df3d6e5aadcb","local-member-id":"bb39151d8411994b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:10:05.285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:10:05.290Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T18:10:05.290Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-10-28T18:10:05.290Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-10-28T18:10:05.291Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"bb39151d8411994b","initial-advertise-peer-urls":["https://192.168.39.7:2380"],"listen-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.7:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T18:10:05.291Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T18:10:06.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-28T18:10:06.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T18:10:06.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgPreVoteResp from bb39151d8411994b at term 2"}
	{"level":"info","ts":"2024-10-28T18:10:06.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T18:10:06.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgVoteResp from bb39151d8411994b at term 3"}
	{"level":"info","ts":"2024-10-28T18:10:06.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became leader at term 3"}
	{"level":"info","ts":"2024-10-28T18:10:06.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 3"}
	{"level":"info","ts":"2024-10-28T18:10:06.472Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"bb39151d8411994b","local-member-attributes":"{Name:test-preload-598338 ClientURLs:[https://192.168.39.7:2379]}","request-path":"/0/members/bb39151d8411994b/attributes","cluster-id":"3202df3d6e5aadcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T18:10:06.472Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:10:06.472Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:10:06.472Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T18:10:06.472Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:10:06.473Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T18:10:06.473Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.7:2379"}
	
	
	==> kernel <==
	 18:10:22 up 0 min,  0 users,  load average: 1.17, 0.32, 0.11
	Linux test-preload-598338 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [27bd4386eb7faffcaf7eb701474958909aaf955f3e6c424c3d0cf8cf165e47ea] <==
	I1028 18:10:08.823487       1 establishing_controller.go:76] Starting EstablishingController
	I1028 18:10:08.823531       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1028 18:10:08.823565       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1028 18:10:08.823602       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1028 18:10:08.823642       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1028 18:10:08.823664       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E1028 18:10:08.915830       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1028 18:10:08.923744       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1028 18:10:08.972040       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1028 18:10:08.987055       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 18:10:08.995254       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1028 18:10:08.995776       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1028 18:10:08.995788       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1028 18:10:09.004827       1 cache.go:39] Caches are synced for autoregister controller
	I1028 18:10:09.004921       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 18:10:09.503016       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1028 18:10:09.805620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 18:10:10.532741       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1028 18:10:10.549863       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1028 18:10:10.580459       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1028 18:10:10.594348       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 18:10:10.600386       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1028 18:10:10.607915       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 18:10:21.355725       1 controller.go:611] quota admission added evaluator for: endpoints
	I1028 18:10:21.404848       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a19faec9786ac64ba7a21590716d0096226ed3c52d22489f0d27ca71f5496e0] <==
	I1028 18:10:21.261862       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1028 18:10:21.262656       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1028 18:10:21.263681       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1028 18:10:21.264039       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1028 18:10:21.264902       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1028 18:10:21.265336       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1028 18:10:21.265239       1 shared_informer.go:262] Caches are synced for GC
	I1028 18:10:21.266337       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1028 18:10:21.268465       1 shared_informer.go:262] Caches are synced for PVC protection
	I1028 18:10:21.290774       1 shared_informer.go:262] Caches are synced for namespace
	I1028 18:10:21.293149       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1028 18:10:21.306745       1 shared_informer.go:262] Caches are synced for deployment
	I1028 18:10:21.316499       1 shared_informer.go:262] Caches are synced for node
	I1028 18:10:21.316718       1 range_allocator.go:173] Starting range CIDR allocator
	I1028 18:10:21.316805       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1028 18:10:21.316847       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1028 18:10:21.319770       1 shared_informer.go:262] Caches are synced for daemon sets
	I1028 18:10:21.322137       1 shared_informer.go:262] Caches are synced for job
	I1028 18:10:21.329479       1 shared_informer.go:262] Caches are synced for HPA
	I1028 18:10:21.444279       1 shared_informer.go:262] Caches are synced for attach detach
	I1028 18:10:21.513834       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 18:10:21.520153       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 18:10:21.949437       1 shared_informer.go:262] Caches are synced for garbage collector
	I1028 18:10:21.949459       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1028 18:10:21.956901       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [dae4b411afeaafc642d8bc7f5708a7a13644b0798eb595c3163a988bd99e239f] <==
	I1028 18:10:10.508613       1 node.go:163] Successfully retrieved node IP: 192.168.39.7
	I1028 18:10:10.509155       1 server_others.go:138] "Detected node IP" address="192.168.39.7"
	I1028 18:10:10.509781       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1028 18:10:10.569121       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1028 18:10:10.569223       1 server_others.go:206] "Using iptables Proxier"
	I1028 18:10:10.569757       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1028 18:10:10.572091       1 server.go:661] "Version info" version="v1.24.4"
	I1028 18:10:10.572140       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:10:10.577235       1 config.go:317] "Starting service config controller"
	I1028 18:10:10.577893       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1028 18:10:10.578458       1 config.go:226] "Starting endpoint slice config controller"
	I1028 18:10:10.579643       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1028 18:10:10.579736       1 config.go:444] "Starting node config controller"
	I1028 18:10:10.579758       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1028 18:10:10.678923       1 shared_informer.go:262] Caches are synced for service config
	I1028 18:10:10.680208       1 shared_informer.go:262] Caches are synced for node config
	I1028 18:10:10.682524       1 shared_informer.go:262] Caches are synced for endpoint slice config
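	kube-proxy logs "Unknown proxy mode, assuming iptables proxy" because no mode was configured, so it programs iptables NAT chains. A small sketch to confirm the rules it wrote (assuming iptables-save is present in the guest):
	
	  out/minikube-linux-amd64 ssh -p test-preload-598338 -- sudo iptables-save -t nat | grep KUBE-SVC | head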
	
	
	==> kube-scheduler [78e0926f7da7494488b11ee409aae4b9a64a5e956788a2c995bda4bcd60ccd15] <==
	I1028 18:10:05.629048       1 serving.go:348] Generated self-signed cert in-memory
	W1028 18:10:08.838993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 18:10:08.839059       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 18:10:08.839074       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 18:10:08.839084       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 18:10:08.921626       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1028 18:10:08.921645       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:10:08.929249       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1028 18:10:08.930219       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 18:10:08.930275       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 18:10:08.930310       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1028 18:10:09.030327       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.203106    1116 apiserver.go:52] "Watching apiserver"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.206923    1116 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.207214    1116 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.207375    1116 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: E1028 18:10:09.209266    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-pc6xj" podUID=e8deca6b-9622-4cf3-96fb-485676362d9f
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277503    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/501253f3-8ca1-4be7-9172-112d2d792fd2-tmp\") pod \"storage-provisioner\" (UID: \"501253f3-8ca1-4be7-9172-112d2d792fd2\") " pod="kube-system/storage-provisioner"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277566    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/479a7625-bb01-4e38-ba78-6f60b799a428-lib-modules\") pod \"kube-proxy-pdzkg\" (UID: \"479a7625-bb01-4e38-ba78-6f60b799a428\") " pod="kube-system/kube-proxy-pdzkg"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277597    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwb2\" (UniqueName: \"kubernetes.io/projected/e8deca6b-9622-4cf3-96fb-485676362d9f-kube-api-access-lwwb2\") pod \"coredns-6d4b75cb6d-pc6xj\" (UID: \"e8deca6b-9622-4cf3-96fb-485676362d9f\") " pod="kube-system/coredns-6d4b75cb6d-pc6xj"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277618    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume\") pod \"coredns-6d4b75cb6d-pc6xj\" (UID: \"e8deca6b-9622-4cf3-96fb-485676362d9f\") " pod="kube-system/coredns-6d4b75cb6d-pc6xj"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277638    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/479a7625-bb01-4e38-ba78-6f60b799a428-xtables-lock\") pod \"kube-proxy-pdzkg\" (UID: \"479a7625-bb01-4e38-ba78-6f60b799a428\") " pod="kube-system/kube-proxy-pdzkg"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277656    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jft8p\" (UniqueName: \"kubernetes.io/projected/479a7625-bb01-4e38-ba78-6f60b799a428-kube-api-access-jft8p\") pod \"kube-proxy-pdzkg\" (UID: \"479a7625-bb01-4e38-ba78-6f60b799a428\") " pod="kube-system/kube-proxy-pdzkg"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277674    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/479a7625-bb01-4e38-ba78-6f60b799a428-kube-proxy\") pod \"kube-proxy-pdzkg\" (UID: \"479a7625-bb01-4e38-ba78-6f60b799a428\") " pod="kube-system/kube-proxy-pdzkg"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277691    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pwt8\" (UniqueName: \"kubernetes.io/projected/501253f3-8ca1-4be7-9172-112d2d792fd2-kube-api-access-7pwt8\") pod \"storage-provisioner\" (UID: \"501253f3-8ca1-4be7-9172-112d2d792fd2\") " pod="kube-system/storage-provisioner"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: I1028 18:10:09.277704    1116 reconciler.go:159] "Reconciler: start to sync state"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: E1028 18:10:09.284990    1116 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: E1028 18:10:09.381070    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: E1028 18:10:09.381182    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume podName:e8deca6b-9622-4cf3-96fb-485676362d9f nodeName:}" failed. No retries permitted until 2024-10-28 18:10:09.881140113 +0000 UTC m=+5.789515591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume") pod "coredns-6d4b75cb6d-pc6xj" (UID: "e8deca6b-9622-4cf3-96fb-485676362d9f") : object "kube-system"/"coredns" not registered
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: E1028 18:10:09.885445    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 18:10:09 test-preload-598338 kubelet[1116]: E1028 18:10:09.885507    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume podName:e8deca6b-9622-4cf3-96fb-485676362d9f nodeName:}" failed. No retries permitted until 2024-10-28 18:10:10.885492706 +0000 UTC m=+6.793868169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume") pod "coredns-6d4b75cb6d-pc6xj" (UID: "e8deca6b-9622-4cf3-96fb-485676362d9f") : object "kube-system"/"coredns" not registered
	Oct 28 18:10:10 test-preload-598338 kubelet[1116]: E1028 18:10:10.893434    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 18:10:10 test-preload-598338 kubelet[1116]: E1028 18:10:10.893549    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume podName:e8deca6b-9622-4cf3-96fb-485676362d9f nodeName:}" failed. No retries permitted until 2024-10-28 18:10:12.89353228 +0000 UTC m=+8.801907749 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume") pod "coredns-6d4b75cb6d-pc6xj" (UID: "e8deca6b-9622-4cf3-96fb-485676362d9f") : object "kube-system"/"coredns" not registered
	Oct 28 18:10:11 test-preload-598338 kubelet[1116]: E1028 18:10:11.324822    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-pc6xj" podUID=e8deca6b-9622-4cf3-96fb-485676362d9f
	Oct 28 18:10:12 test-preload-598338 kubelet[1116]: E1028 18:10:12.909254    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 18:10:12 test-preload-598338 kubelet[1116]: E1028 18:10:12.909390    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume podName:e8deca6b-9622-4cf3-96fb-485676362d9f nodeName:}" failed. No retries permitted until 2024-10-28 18:10:16.909370183 +0000 UTC m=+12.817745646 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e8deca6b-9622-4cf3-96fb-485676362d9f-config-volume") pod "coredns-6d4b75cb6d-pc6xj" (UID: "e8deca6b-9622-4cf3-96fb-485676362d9f") : object "kube-system"/"coredns" not registered
	Oct 28 18:10:13 test-preload-598338 kubelet[1116]: E1028 18:10:13.325745    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-pc6xj" podUID=e8deca6b-9622-4cf3-96fb-485676362d9f
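	The repeated "Container runtime network not ready" errors above are transient here (coredns is Running in the container listing further up); the path the kubelet is complaining about comes straight from the error message and can be checked directly:
	
	  out/minikube-linux-amd64 ssh -p test-preload-598338 -- ls -l /etc/cni/net.d/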
	
	
	==> storage-provisioner [754bba2b95617d38e2ee7e39c6fdb4865861619c2f12e5f7d86fc8ba07706e5b] <==
	I1028 18:10:10.014322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-598338 -n test-preload-598338
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-598338 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-598338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-598338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-598338: (1.109271603s)
--- FAIL: TestPreload (239.62s)
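A minimal sketch for re-running only this test with the standard Go test runner (the integration harness may need additional -args flags for the driver and container runtime, which are not shown in this log):

  go test ./test/integration -run 'TestPreload$' -timeout 40m -v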

                                                
                                    
x
+
TestKubernetesUpgrade (427.39s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m1.054497146s)
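The non-zero exit above (status 109 after just over five minutes) comes from the initial start at v1.20.0; the upgrade that gives the test its name would normally follow as a second start against the same profile. A hedged sketch of that two-step pattern (the target version is a placeholder, not taken from this log):

  out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --kubernetes-version=<newer-version> --driver=kvm2 --container-runtime=crio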

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-192352] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-192352" primary control-plane node in "kubernetes-upgrade-192352" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 18:12:15.048735   55416 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:12:15.048852   55416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:12:15.048862   55416 out.go:358] Setting ErrFile to fd 2...
	I1028 18:12:15.048866   55416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:12:15.049152   55416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:12:15.049714   55416 out.go:352] Setting JSON to false
	I1028 18:12:15.050547   55416 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6878,"bootTime":1730132257,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:12:15.050638   55416 start.go:139] virtualization: kvm guest
	I1028 18:12:15.052858   55416 out.go:177] * [kubernetes-upgrade-192352] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:12:15.054022   55416 notify.go:220] Checking for updates...
	I1028 18:12:15.055061   55416 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:12:15.057213   55416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:12:15.058623   55416 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:12:15.059724   55416 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:12:15.060754   55416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:12:15.061972   55416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:12:15.063308   55416 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:12:15.098519   55416 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 18:12:15.100166   55416 start.go:297] selected driver: kvm2
	I1028 18:12:15.100180   55416 start.go:901] validating driver "kvm2" against <nil>
	I1028 18:12:15.100191   55416 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:12:15.100897   55416 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:12:19.768346   55416 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:12:19.782987   55416 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:12:19.783040   55416 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 18:12:19.783368   55416 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 18:12:19.783402   55416 cni.go:84] Creating CNI manager for ""
	I1028 18:12:19.783454   55416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:12:19.783464   55416 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 18:12:19.783524   55416 start.go:340] cluster config:
	{Name:kubernetes-upgrade-192352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:12:19.783651   55416 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:12:19.785181   55416 out.go:177] * Starting "kubernetes-upgrade-192352" primary control-plane node in "kubernetes-upgrade-192352" cluster
	I1028 18:12:19.786239   55416 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:12:19.786274   55416 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 18:12:19.786286   55416 cache.go:56] Caching tarball of preloaded images
	I1028 18:12:19.786382   55416 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:12:19.786394   55416 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 18:12:19.786710   55416 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/config.json ...
	I1028 18:12:19.786743   55416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/config.json: {Name:mk5dcffe58ed6a54ed9c76af6f90a9b197e29f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:12:19.786885   55416 start.go:360] acquireMachinesLock for kubernetes-upgrade-192352: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:12:44.069114   55416 start.go:364] duration metric: took 24.282183807s to acquireMachinesLock for "kubernetes-upgrade-192352"
	I1028 18:12:44.069181   55416 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-192352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:12:44.069281   55416 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 18:12:44.071063   55416 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 18:12:44.071234   55416 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:12:44.071280   55416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:12:44.088164   55416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I1028 18:12:44.088650   55416 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:12:44.089314   55416 main.go:141] libmachine: Using API Version  1
	I1028 18:12:44.089349   55416 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:12:44.089756   55416 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:12:44.089925   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetMachineName
	I1028 18:12:44.090066   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:12:44.090250   55416 start.go:159] libmachine.API.Create for "kubernetes-upgrade-192352" (driver="kvm2")
	I1028 18:12:44.090289   55416 client.go:168] LocalClient.Create starting
	I1028 18:12:44.090322   55416 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 18:12:44.090378   55416 main.go:141] libmachine: Decoding PEM data...
	I1028 18:12:44.090399   55416 main.go:141] libmachine: Parsing certificate...
	I1028 18:12:44.090460   55416 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 18:12:44.090485   55416 main.go:141] libmachine: Decoding PEM data...
	I1028 18:12:44.090508   55416 main.go:141] libmachine: Parsing certificate...
	I1028 18:12:44.090538   55416 main.go:141] libmachine: Running pre-create checks...
	I1028 18:12:44.090550   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .PreCreateCheck
	I1028 18:12:44.090987   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetConfigRaw
	I1028 18:12:44.091521   55416 main.go:141] libmachine: Creating machine...
	I1028 18:12:44.091537   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .Create
	I1028 18:12:44.091725   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Creating KVM machine...
	I1028 18:12:44.092777   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found existing default KVM network
	I1028 18:12:44.093762   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:44.093607   55806 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:d8:7c} reservation:<nil>}
	I1028 18:12:44.094610   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:44.094524   55806 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fa50}
	I1028 18:12:44.094634   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | created network xml: 
	I1028 18:12:44.094645   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | <network>
	I1028 18:12:44.094662   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |   <name>mk-kubernetes-upgrade-192352</name>
	I1028 18:12:44.094677   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |   <dns enable='no'/>
	I1028 18:12:44.094687   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |   
	I1028 18:12:44.094698   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1028 18:12:44.094713   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |     <dhcp>
	I1028 18:12:44.094745   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1028 18:12:44.094765   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |     </dhcp>
	I1028 18:12:44.094779   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |   </ip>
	I1028 18:12:44.094789   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG |   
	I1028 18:12:44.094797   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | </network>
	I1028 18:12:44.094807   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | 
	I1028 18:12:44.099917   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | trying to create private KVM network mk-kubernetes-upgrade-192352 192.168.50.0/24...
	I1028 18:12:44.166922   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | private KVM network mk-kubernetes-upgrade-192352 192.168.50.0/24 created
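	Once the private network exists it can be inspected with virsh against the same libvirt URI the driver uses (a read-only sketch; the network name and URI are taken from the surrounding log lines):
	
	  virsh --connect qemu:///system net-dumpxml mk-kubernetes-upgrade-192352
	  virsh --connect qemu:///system net-dhcp-leases mk-kubernetes-upgrade-192352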
	I1028 18:12:44.166959   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:44.166869   55806 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:12:44.166980   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352 ...
	I1028 18:12:44.166998   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 18:12:44.167117   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 18:12:44.414096   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:44.413946   55806 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa...
	I1028 18:12:44.530803   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:44.530652   55806 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/kubernetes-upgrade-192352.rawdisk...
	I1028 18:12:44.530842   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Writing magic tar header
	I1028 18:12:44.531327   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Writing SSH key tar header
	I1028 18:12:44.532096   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:44.532031   55806 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352 ...
	I1028 18:12:44.532155   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352
	I1028 18:12:44.532176   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 18:12:44.532269   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352 (perms=drwx------)
	I1028 18:12:44.532303   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:12:44.532319   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 18:12:44.532332   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 18:12:44.532351   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 18:12:44.532365   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 18:12:44.532378   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Checking permissions on dir: /home/jenkins
	I1028 18:12:44.532390   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Checking permissions on dir: /home
	I1028 18:12:44.532400   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Skipping /home - not owner
	I1028 18:12:44.532419   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 18:12:44.532431   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 18:12:44.532443   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 18:12:44.532490   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Creating domain...
	I1028 18:12:44.533534   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) define libvirt domain using xml: 
	I1028 18:12:44.533555   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) <domain type='kvm'>
	I1028 18:12:44.533578   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   <name>kubernetes-upgrade-192352</name>
	I1028 18:12:44.533594   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   <memory unit='MiB'>2200</memory>
	I1028 18:12:44.533603   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   <vcpu>2</vcpu>
	I1028 18:12:44.533611   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   <features>
	I1028 18:12:44.533619   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <acpi/>
	I1028 18:12:44.533626   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <apic/>
	I1028 18:12:44.533642   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <pae/>
	I1028 18:12:44.533652   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     
	I1028 18:12:44.533658   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   </features>
	I1028 18:12:44.533663   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   <cpu mode='host-passthrough'>
	I1028 18:12:44.533684   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   
	I1028 18:12:44.533702   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   </cpu>
	I1028 18:12:44.533712   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   <os>
	I1028 18:12:44.533727   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <type>hvm</type>
	I1028 18:12:44.533741   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <boot dev='cdrom'/>
	I1028 18:12:44.533747   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <boot dev='hd'/>
	I1028 18:12:44.533756   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <bootmenu enable='no'/>
	I1028 18:12:44.533763   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   </os>
	I1028 18:12:44.533781   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   <devices>
	I1028 18:12:44.533796   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <disk type='file' device='cdrom'>
	I1028 18:12:44.533825   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/boot2docker.iso'/>
	I1028 18:12:44.533835   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <target dev='hdc' bus='scsi'/>
	I1028 18:12:44.533843   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <readonly/>
	I1028 18:12:44.533853   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     </disk>
	I1028 18:12:44.533863   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <disk type='file' device='disk'>
	I1028 18:12:44.533877   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 18:12:44.533891   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/kubernetes-upgrade-192352.rawdisk'/>
	I1028 18:12:44.533898   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <target dev='hda' bus='virtio'/>
	I1028 18:12:44.533907   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     </disk>
	I1028 18:12:44.533914   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <interface type='network'>
	I1028 18:12:44.533924   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <source network='mk-kubernetes-upgrade-192352'/>
	I1028 18:12:44.533932   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <model type='virtio'/>
	I1028 18:12:44.533940   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     </interface>
	I1028 18:12:44.533948   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <interface type='network'>
	I1028 18:12:44.533965   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <source network='default'/>
	I1028 18:12:44.533983   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <model type='virtio'/>
	I1028 18:12:44.534023   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     </interface>
	I1028 18:12:44.534031   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <serial type='pty'>
	I1028 18:12:44.534038   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <target port='0'/>
	I1028 18:12:44.534044   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     </serial>
	I1028 18:12:44.534052   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <console type='pty'>
	I1028 18:12:44.534066   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <target type='serial' port='0'/>
	I1028 18:12:44.534075   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     </console>
	I1028 18:12:44.534082   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     <rng model='virtio'>
	I1028 18:12:44.534091   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)       <backend model='random'>/dev/random</backend>
	I1028 18:12:44.534099   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     </rng>
	I1028 18:12:44.534108   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     
	I1028 18:12:44.534115   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)     
	I1028 18:12:44.534132   55416 main.go:141] libmachine: (kubernetes-upgrade-192352)   </devices>
	I1028 18:12:44.534144   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) </domain>
	I1028 18:12:44.534154   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) 
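	For reference, the <domain> XML dumped above can also be applied by hand with the stock libvirt CLI. This is only a sketch for reproducing the step outside minikube: the file name domain.xml is an assumption, and the mk-kubernetes-upgrade-192352 network must already be defined and active.

	    # save the <domain>...</domain> block above to domain.xml, then:
	    virsh define domain.xml                     # register the domain with libvirt
	    virsh start kubernetes-upgrade-192352       # boot it ("Creating domain..." below)
	    virsh domifaddr kubernetes-upgrade-192352   # poll for the DHCP lease, as the retry loop below does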
	I1028 18:12:44.538070   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:1d:7c:b3 in network default
	I1028 18:12:44.538713   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Ensuring networks are active...
	I1028 18:12:44.538738   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:44.539419   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Ensuring network default is active
	I1028 18:12:44.539741   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Ensuring network mk-kubernetes-upgrade-192352 is active
	I1028 18:12:44.540395   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Getting domain xml...
	I1028 18:12:44.541229   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Creating domain...
	I1028 18:12:45.836627   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Waiting to get IP...
	I1028 18:12:45.837629   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:45.838088   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:45.838114   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:45.837996   55806 retry.go:31] will retry after 189.99971ms: waiting for machine to come up
	I1028 18:12:46.029535   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:46.030032   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:46.030064   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:46.029999   55806 retry.go:31] will retry after 317.957135ms: waiting for machine to come up
	I1028 18:12:46.349674   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:46.350179   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:46.350206   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:46.350146   55806 retry.go:31] will retry after 435.280437ms: waiting for machine to come up
	I1028 18:12:46.786748   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:46.787246   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:46.787270   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:46.787206   55806 retry.go:31] will retry after 480.140563ms: waiting for machine to come up
	I1028 18:12:47.269068   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:47.269557   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:47.269582   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:47.269511   55806 retry.go:31] will retry after 496.067399ms: waiting for machine to come up
	I1028 18:12:47.767230   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:47.767731   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:47.767760   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:47.767680   55806 retry.go:31] will retry after 633.169935ms: waiting for machine to come up
	I1028 18:12:48.402480   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:48.402960   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:48.402994   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:48.402883   55806 retry.go:31] will retry after 1.079830562s: waiting for machine to come up
	I1028 18:12:49.484214   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:49.484680   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:49.484705   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:49.484645   55806 retry.go:31] will retry after 1.266228524s: waiting for machine to come up
	I1028 18:12:50.752072   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:50.752529   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:50.752561   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:50.752457   55806 retry.go:31] will retry after 1.229734183s: waiting for machine to come up
	I1028 18:12:51.983475   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:51.983872   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:51.983899   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:51.983831   55806 retry.go:31] will retry after 1.900139149s: waiting for machine to come up
	I1028 18:12:53.887028   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:53.887443   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:53.887497   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:53.887411   55806 retry.go:31] will retry after 2.545730405s: waiting for machine to come up
	I1028 18:12:56.434528   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:56.435050   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:56.435082   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:56.434991   55806 retry.go:31] will retry after 3.368726084s: waiting for machine to come up
	I1028 18:12:59.805513   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:12:59.806008   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:12:59.806038   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:12:59.805966   55806 retry.go:31] will retry after 3.50590888s: waiting for machine to come up
	I1028 18:13:03.312992   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:03.313475   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find current IP address of domain kubernetes-upgrade-192352 in network mk-kubernetes-upgrade-192352
	I1028 18:13:03.313500   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | I1028 18:13:03.313416   55806 retry.go:31] will retry after 4.340789999s: waiting for machine to come up
	I1028 18:13:07.657498   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.657974   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Found IP for machine: 192.168.50.62
	I1028 18:13:07.658002   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has current primary IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.658010   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Reserving static IP address...
	I1028 18:13:07.658514   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-192352", mac: "52:54:00:35:b0:c5", ip: "192.168.50.62"} in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.735210   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Reserved static IP address: 192.168.50.62
	I1028 18:13:07.735239   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Waiting for SSH to be available...
	I1028 18:13:07.735260   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Getting to WaitForSSH function...
	I1028 18:13:07.738768   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.739246   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:07.739280   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.739427   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Using SSH client type: external
	I1028 18:13:07.739457   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa (-rw-------)
	I1028 18:13:07.739487   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:13:07.739522   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | About to run SSH command:
	I1028 18:13:07.739553   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | exit 0
	I1028 18:13:07.872628   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | SSH cmd err, output: <nil>: 
	I1028 18:13:07.872880   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) KVM machine creation complete!
	I1028 18:13:07.873288   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetConfigRaw
	I1028 18:13:07.873890   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:13:07.874097   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:13:07.874279   55416 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 18:13:07.874293   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetState
	I1028 18:13:07.875515   55416 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 18:13:07.875528   55416 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 18:13:07.875535   55416 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 18:13:07.875542   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:07.878131   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.878515   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:07.878554   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.878682   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:07.878870   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:07.879023   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:07.879201   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:07.879350   55416 main.go:141] libmachine: Using SSH client type: native
	I1028 18:13:07.879572   55416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:13:07.879587   55416 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 18:13:07.987912   55416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:13:07.987933   55416 main.go:141] libmachine: Detecting the provisioner...
	I1028 18:13:07.987942   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:07.991103   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.991579   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:07.991611   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:07.991804   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:07.992037   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:07.992223   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:07.992395   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:07.992585   55416 main.go:141] libmachine: Using SSH client type: native
	I1028 18:13:07.992809   55416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:13:07.992826   55416 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 18:13:08.101213   55416 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 18:13:08.101362   55416 main.go:141] libmachine: found compatible host: buildroot
	I1028 18:13:08.101379   55416 main.go:141] libmachine: Provisioning with buildroot...
	I1028 18:13:08.101389   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetMachineName
	I1028 18:13:08.101653   55416 buildroot.go:166] provisioning hostname "kubernetes-upgrade-192352"
	I1028 18:13:08.101681   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetMachineName
	I1028 18:13:08.101890   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:08.104699   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.105056   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.105086   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.105225   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:08.105383   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.105522   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.105670   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:08.105827   55416 main.go:141] libmachine: Using SSH client type: native
	I1028 18:13:08.106030   55416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:13:08.106043   55416 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-192352 && echo "kubernetes-upgrade-192352" | sudo tee /etc/hostname
	I1028 18:13:08.231148   55416 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-192352
	
	I1028 18:13:08.231198   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:08.233675   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.233953   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.233993   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.234196   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:08.234388   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.234537   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.234690   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:08.234847   55416 main.go:141] libmachine: Using SSH client type: native
	I1028 18:13:08.235048   55416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:13:08.235074   55416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-192352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-192352/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-192352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:13:08.349527   55416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
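	The shell snippet above is idempotent: it only rewrites the 127.0.1.1 entry when the new hostname is missing from /etc/hosts. A quick manual check of the result, using nothing beyond the tools already on the Buildroot guest, would be:

	    hostname                        # expect: kubernetes-upgrade-192352
	    grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 kubernetes-upgrade-192352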
	I1028 18:13:08.349560   55416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:13:08.349606   55416 buildroot.go:174] setting up certificates
	I1028 18:13:08.349629   55416 provision.go:84] configureAuth start
	I1028 18:13:08.349644   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetMachineName
	I1028 18:13:08.349937   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetIP
	I1028 18:13:08.353073   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.353500   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.353532   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.353723   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:08.356206   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.356646   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.356673   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.356810   55416 provision.go:143] copyHostCerts
	I1028 18:13:08.356878   55416 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:13:08.356895   55416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:13:08.356957   55416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:13:08.357072   55416 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:13:08.357082   55416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:13:08.357113   55416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:13:08.357185   55416 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:13:08.357195   55416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:13:08.357221   55416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:13:08.357285   55416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-192352 san=[127.0.0.1 192.168.50.62 kubernetes-upgrade-192352 localhost minikube]
	I1028 18:13:08.437087   55416 provision.go:177] copyRemoteCerts
	I1028 18:13:08.437160   55416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:13:08.437191   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:08.440131   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.440557   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.440590   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.440755   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:08.440945   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.441114   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:08.441250   55416 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa Username:docker}
	I1028 18:13:08.528638   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1028 18:13:08.554554   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:13:08.579162   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:13:08.603737   55416 provision.go:87] duration metric: took 254.094728ms to configureAuth
	I1028 18:13:08.603762   55416 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:13:08.603937   55416 config.go:182] Loaded profile config "kubernetes-upgrade-192352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:13:08.604031   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:08.606501   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.606813   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.606839   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.607136   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:08.607343   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.607537   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.607675   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:08.607823   55416 main.go:141] libmachine: Using SSH client type: native
	I1028 18:13:08.607986   55416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:13:08.607999   55416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:13:08.843018   55416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:13:08.843062   55416 main.go:141] libmachine: Checking connection to Docker...
	I1028 18:13:08.843071   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetURL
	I1028 18:13:08.844199   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Using libvirt version 6000000
	I1028 18:13:08.846117   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.846451   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.846473   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.846642   55416 main.go:141] libmachine: Docker is up and running!
	I1028 18:13:08.846656   55416 main.go:141] libmachine: Reticulating splines...
	I1028 18:13:08.846662   55416 client.go:171] duration metric: took 24.756363092s to LocalClient.Create
	I1028 18:13:08.846681   55416 start.go:167] duration metric: took 24.756433147s to libmachine.API.Create "kubernetes-upgrade-192352"
	I1028 18:13:08.846700   55416 start.go:293] postStartSetup for "kubernetes-upgrade-192352" (driver="kvm2")
	I1028 18:13:08.846711   55416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:13:08.846726   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:13:08.846953   55416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:13:08.846995   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:08.848861   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.849202   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.849227   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.849412   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:08.849581   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.849737   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:08.849868   55416 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa Username:docker}
	I1028 18:13:08.932422   55416 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:13:08.937134   55416 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:13:08.937158   55416 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:13:08.937224   55416 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:13:08.937298   55416 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:13:08.937380   55416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:13:08.947102   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:13:08.969484   55416 start.go:296] duration metric: took 122.770263ms for postStartSetup
	I1028 18:13:08.969535   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetConfigRaw
	I1028 18:13:08.970120   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetIP
	I1028 18:13:08.972921   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.973314   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.973348   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.973548   55416 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/config.json ...
	I1028 18:13:08.973748   55416 start.go:128] duration metric: took 24.904444599s to createHost
	I1028 18:13:08.973781   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:08.975875   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.976183   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:08.976211   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:08.976320   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:08.976508   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.976630   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:08.976766   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:08.976899   55416 main.go:141] libmachine: Using SSH client type: native
	I1028 18:13:08.977058   55416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:13:08.977068   55416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:13:09.085173   55416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730139189.049582567
	
	I1028 18:13:09.085195   55416 fix.go:216] guest clock: 1730139189.049582567
	I1028 18:13:09.085201   55416 fix.go:229] Guest: 2024-10-28 18:13:09.049582567 +0000 UTC Remote: 2024-10-28 18:13:08.973766741 +0000 UTC m=+53.971221140 (delta=75.815826ms)
	I1028 18:13:09.085218   55416 fix.go:200] guest clock delta is within tolerance: 75.815826ms
	I1028 18:13:09.085222   55416 start.go:83] releasing machines lock for "kubernetes-upgrade-192352", held for 25.016078386s
	I1028 18:13:09.085245   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:13:09.085501   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetIP
	I1028 18:13:09.088338   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:09.088698   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:09.088727   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:09.088860   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:13:09.089322   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:13:09.089528   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:13:09.089622   55416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:13:09.089661   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:09.089742   55416 ssh_runner.go:195] Run: cat /version.json
	I1028 18:13:09.089757   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:13:09.092360   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:09.092522   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:09.092688   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:09.092716   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:09.092883   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:09.092903   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:09.092936   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:09.093058   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:13:09.093106   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:09.093275   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:09.093286   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:13:09.093440   55416 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa Username:docker}
	I1028 18:13:09.093483   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:13:09.093595   55416 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa Username:docker}
	I1028 18:13:09.169619   55416 ssh_runner.go:195] Run: systemctl --version
	I1028 18:13:09.197125   55416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:13:09.363541   55416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:13:09.372114   55416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:13:09.372178   55416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:13:09.393222   55416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:13:09.393245   55416 start.go:495] detecting cgroup driver to use...
	I1028 18:13:09.393302   55416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:13:09.410092   55416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:13:09.424081   55416 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:13:09.424128   55416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:13:09.438130   55416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:13:09.451046   55416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:13:09.569185   55416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:13:09.736752   55416 docker.go:233] disabling docker service ...
	I1028 18:13:09.736813   55416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:13:09.751014   55416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:13:09.763819   55416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:13:09.884365   55416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:13:10.005353   55416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:13:10.019505   55416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:13:10.038048   55416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:13:10.038105   55416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:13:10.048451   55416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:13:10.048520   55416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:13:10.059322   55416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:13:10.069321   55416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:13:10.080088   55416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
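	Taken together, the sed edits above are intended to leave the CRI-O drop-in with roughly the following keys (a sketch of the expected end state rather than a capture of the file; section names follow the stock crio.conf layout, and other settings in 02-crio.conf are untouched):

	    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.2"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"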
	I1028 18:13:10.090821   55416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:13:10.099778   55416 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:13:10.099833   55416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:13:10.112019   55416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:13:10.121692   55416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:13:10.231177   55416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:13:10.538734   55416 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:13:10.538816   55416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:13:10.544136   55416 start.go:563] Will wait 60s for crictl version
	I1028 18:13:10.544204   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:10.548548   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:13:10.594763   55416 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
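	The same runtime checks can be repeated by hand over the SSH session minikube just provisioned; the -p flag names the profile used in this run, and the expected output mirrors the crictl version block above:

	    minikube -p kubernetes-upgrade-192352 ssh -- sudo crictl version   # RuntimeName: cri-o, RuntimeVersion: 1.29.1
	    minikube -p kubernetes-upgrade-192352 ssh -- crio --version        # same CRI-O build, reported by the binary itself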
	I1028 18:13:10.594830   55416 ssh_runner.go:195] Run: crio --version
	I1028 18:13:10.625152   55416 ssh_runner.go:195] Run: crio --version
	I1028 18:13:10.654837   55416 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 18:13:10.656034   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetIP
	I1028 18:13:10.659234   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:10.659636   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:12:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:13:10.659659   55416 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:13:10.659870   55416 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:13:10.664174   55416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:13:10.676535   55416 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-192352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:13:10.676656   55416 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:13:10.676713   55416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:13:10.708844   55416 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:13:10.708923   55416 ssh_runner.go:195] Run: which lz4
	I1028 18:13:10.713237   55416 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:13:10.718988   55416 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:13:10.719017   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:13:12.376802   55416 crio.go:462] duration metric: took 1.663608126s to copy over tarball
	I1028 18:13:12.376894   55416 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:13:14.960048   55416 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583113409s)
	I1028 18:13:14.960082   55416 crio.go:469] duration metric: took 2.58324448s to extract the tarball
	I1028 18:13:14.960092   55416 ssh_runner.go:146] rm: /preloaded.tar.lz4
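
	(Editor's note: the preload path above is: stat the remote /preloaded.tar.lz4, scp the cached tarball over when it is missing, unpack it into /var with lz4-compressed tar while preserving security xattrs, then delete the tarball. A rough local equivalent is sketched below; extractPreload and the paths are assumptions for illustration, not minikube code.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image preload into dir while
	// keeping security xattrs (so file capabilities survive), then deletes the
	// tarball - the same tar/rm steps the log runs over SSH.
	func extractPreload(tarball, dir string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload not present, it would be copied over first: %w", err)
		}
		tar := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dir, "-xf", tarball)
		tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
		if err := tar.Run(); err != nil {
			return err
		}
		return exec.Command("sudo", "rm", "-f", tarball).Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}
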
	I1028 18:13:15.002793   55416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:13:15.048611   55416 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:13:15.048637   55416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:13:15.048697   55416 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:13:15.048718   55416 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:13:15.048741   55416 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:13:15.048792   55416 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:13:15.048781   55416 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:13:15.048835   55416 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:13:15.048832   55416 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:13:15.048741   55416 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:13:15.050482   55416 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:13:15.050498   55416 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:13:15.050502   55416 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:13:15.050525   55416 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:13:15.050537   55416 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:13:15.050542   55416 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:13:15.050553   55416 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:13:15.050534   55416 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:13:15.234121   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:13:15.243982   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:13:15.246633   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:13:15.250108   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:13:15.271277   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:13:15.287191   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:13:15.306003   55416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:13:15.306047   55416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:13:15.306092   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:15.370949   55416 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:13:15.370976   55416 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:13:15.370991   55416 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:13:15.371007   55416 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:13:15.371036   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:15.371046   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:15.371056   55416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:13:15.371090   55416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:13:15.371128   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:15.395382   55416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:13:15.395421   55416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:13:15.395470   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:15.401542   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:13:15.408687   55416 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:13:15.408702   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:13:15.408726   55416 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:13:15.408755   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:13:15.408756   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:15.408825   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:13:15.408926   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:13:15.409027   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:13:15.549902   55416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:13:15.549974   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:13:15.549992   55416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:13:15.550004   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:13:15.550041   55416 ssh_runner.go:195] Run: which crictl
	I1028 18:13:15.550118   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:13:15.550130   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:13:15.550192   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:13:15.554424   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:13:15.681775   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:13:15.688612   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:13:15.688629   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:13:15.688694   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:13:15.688727   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:13:15.688800   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:13:15.688827   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:13:15.807990   55416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:13:15.849106   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:13:15.849123   55416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:13:15.849203   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:13:15.849220   55416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:13:15.849293   55416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:13:15.849338   55416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:13:15.893036   55416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:13:15.896569   55416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:13:15.928397   55416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:13:17.293442   55416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:13:17.430392   55416 cache_images.go:92] duration metric: took 2.381736342s to LoadCachedImages
	W1028 18:13:17.430499   55416 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
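
	(Editor's note: the "needs transfer" lines above come from minikube comparing each required image's ID in the runtime, via podman image inspect, against the ID it expects; a mismatch or a missing image triggers a crictl rmi followed by a load from the local cache directory, which fails here because the cached coredns_1.7.0 file is absent. The sketch below shows that decision in isolation; imageID and needsTransfer are illustrative helpers, and the hash is simply the one quoted in the log.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageID asks the runtime (podman here, as in the log) for an image ID;
	// an empty string means the image is not present.
	func imageID(ref string) string {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", ref).Output()
		if err != nil {
			return ""
		}
		return strings.TrimSpace(string(out))
	}

	// needsTransfer mirrors the decision in the log: reload the image from the
	// cache whenever the runtime's copy is missing or has an unexpected ID.
	func needsTransfer(ref, wantID string) bool {
		got := imageID(ref)
		return got == "" || got != wantID
	}

	func main() {
		// Hash below is the kube-apiserver ID quoted in the log; illustrative only.
		if needsTransfer("registry.k8s.io/kube-apiserver:v1.20.0",
			"ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99") {
			fmt.Println("image must be removed (crictl rmi) and loaded from the cache dir")
		}
	}
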
	I1028 18:13:17.430522   55416 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.20.0 crio true true} ...
	I1028 18:13:17.430655   55416 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-192352 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
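
	(Editor's note: the block above is the systemd drop-in minikube writes for the kubelet; the empty ExecStart= clears the unit's default command and the second ExecStart supplies the version- and node-specific flags. Below is a sketch that renders a simplified form of that drop-in with text/template; the template and field names are assumptions, while the flag line itself is copied from the log.)

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn is a simplified form of the kubelet drop-in shown above; only the
	// fields that vary per node are templated.
	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// Values copied from the log; any real deployment would compute them.
		if err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.20.0",
			"NodeName":          "kubernetes-upgrade-192352",
			"NodeIP":            "192.168.50.62",
		}); err != nil {
			panic(err)
		}
	}
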
	I1028 18:13:17.430753   55416 ssh_runner.go:195] Run: crio config
	I1028 18:13:17.479145   55416 cni.go:84] Creating CNI manager for ""
	I1028 18:13:17.479176   55416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:13:17.479187   55416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:13:17.479212   55416 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-192352 NodeName:kubernetes-upgrade-192352 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:13:17.479374   55416 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-192352"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:13:17.479430   55416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:13:17.489206   55416 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:13:17.489280   55416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:13:17.499423   55416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1028 18:13:17.517913   55416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:13:17.534675   55416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
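
	(Editor's note: the kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets copied to /var/tmp/minikube/kubeadm.yaml.new. The small helper below, a standard-library-only sketch, splits such a stream on document separators and reports each kind; it is an illustration, not part of minikube.)

	package main

	import (
		"fmt"
		"strings"
	)

	// kinds lists the "kind:" of each YAML document in a multi-document kubeadm
	// config like the one rendered above.
	func kinds(manifest string) []string {
		var out []string
		for _, doc := range strings.Split(manifest, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				trimmed := strings.TrimSpace(line)
				if strings.HasPrefix(trimmed, "kind:") {
					out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
					break
				}
			}
		}
		return out
	}

	func main() {
		sample := "apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\n"
		fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration]
	}
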
	I1028 18:13:17.551947   55416 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I1028 18:13:17.556141   55416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:13:17.568178   55416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:13:17.704245   55416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:13:17.722197   55416 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352 for IP: 192.168.50.62
	I1028 18:13:17.722216   55416 certs.go:194] generating shared ca certs ...
	I1028 18:13:17.722231   55416 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:13:17.722395   55416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:13:17.722463   55416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:13:17.722475   55416 certs.go:256] generating profile certs ...
	I1028 18:13:17.722542   55416 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.key
	I1028 18:13:17.722561   55416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.crt with IP's: []
	I1028 18:13:17.824275   55416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.crt ...
	I1028 18:13:17.824313   55416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.crt: {Name:mk8ed198c11576af31714930558be2e751f8a7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:13:17.824542   55416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.key ...
	I1028 18:13:17.824567   55416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.key: {Name:mke6146f1c834825b41cc4258513eb5bc058aef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:13:17.824699   55416 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.key.1473bd0a
	I1028 18:13:17.824731   55416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.crt.1473bd0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.62]
	I1028 18:13:17.892658   55416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.crt.1473bd0a ...
	I1028 18:13:17.892688   55416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.crt.1473bd0a: {Name:mkef922c9733e7c3a065bee17d0f3ccfa5f9c416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:13:17.892868   55416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.key.1473bd0a ...
	I1028 18:13:17.892884   55416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.key.1473bd0a: {Name:mk4f4f02e43ee58687ea54de81ea77db2ad7cf7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:13:17.892977   55416 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.crt.1473bd0a -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.crt
	I1028 18:13:17.893068   55416 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.key.1473bd0a -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.key
	I1028 18:13:17.893132   55416 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.key
	I1028 18:13:17.893148   55416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.crt with IP's: []
	I1028 18:13:17.974914   55416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.crt ...
	I1028 18:13:17.974940   55416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.crt: {Name:mkc307e3da0d45341e96245ba18b90ae7f55877e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:13:17.975089   55416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.key ...
	I1028 18:13:17.975102   55416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.key: {Name:mk3b946621de47e5a4290d054924b01179903520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
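
	(Editor's note: the certs.go/crypto.go lines above generate the per-profile certificates: a client cert for minikube-user, an apiserver serving cert with the listed IP SANs, and an aggregator proxy-client cert, all signed by the shared minikubeCA. The sketch below issues a CA-signed serving cert with the same IP SANs using only the standard library; the key size, serial numbers, and the ~26280h validity are illustrative choices, not minikube's exact parameters.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// signServingCert issues a certificate signed by the given CA with the
	// supplied IP SANs, roughly what the apiserver profile cert needs.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		// A self-signed CA stands in for minikubeCA here.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// IP SANs copied from the apiserver cert generation in the log.
		ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.62")}
		certPEM, err := signServingCert(caCert, caKey, ips)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("issued %d bytes of PEM\n", len(certPEM))
	}
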
	I1028 18:13:17.975269   55416 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:13:17.975308   55416 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:13:17.975318   55416 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:13:17.975342   55416 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:13:17.975371   55416 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:13:17.975392   55416 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:13:17.975427   55416 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:13:17.975986   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:13:18.004838   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:13:18.032576   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:13:18.056680   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:13:18.080489   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 18:13:18.104518   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:13:18.173767   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:13:18.198184   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:13:18.221991   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:13:18.246248   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:13:18.269962   55416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:13:18.293964   55416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:13:18.312038   55416 ssh_runner.go:195] Run: openssl version
	I1028 18:13:18.318186   55416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:13:18.329834   55416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:13:18.334490   55416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:13:18.334542   55416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:13:18.340406   55416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:13:18.350687   55416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:13:18.361128   55416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:13:18.365567   55416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:13:18.365622   55416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:13:18.371210   55416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:13:18.381297   55416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:13:18.392871   55416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:13:18.397177   55416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:13:18.397258   55416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:13:18.402744   55416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
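
	(Editor's note: the openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and then symlinks it into /etc/ssl/certs under its OpenSSL subject-hash name, for example b5213941.0 for minikubeCA.pem, which is how TLS clients on the node locate it. The wrapper below runs the same two commands from Go; installCA is an illustrative helper and assumes openssl and sudo are available.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCA computes the certificate's OpenSSL subject hash and symlinks
	// <hash>.0 in /etc/ssl/certs to the PEM file, mirroring the log above.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		script := fmt.Sprintf(`test -L %s || ln -fs %s %s`, link, pemPath, link)
		return exec.Command("sudo", "/bin/bash", "-c", script).Run()
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("install failed:", err)
		}
	}
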
	I1028 18:13:18.412850   55416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:13:18.417160   55416 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 18:13:18.417218   55416 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-192352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:13:18.417295   55416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:13:18.417347   55416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:13:18.458893   55416 cri.go:89] found id: ""
	I1028 18:13:18.458984   55416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:13:18.469790   55416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:13:18.479197   55416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:13:18.488445   55416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:13:18.488465   55416 kubeadm.go:157] found existing configuration files:
	
	I1028 18:13:18.488520   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:13:18.497354   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:13:18.497423   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:13:18.506354   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:13:18.515097   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:13:18.515148   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:13:18.525074   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:13:18.533611   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:13:18.533672   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:13:18.542916   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:13:18.551720   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:13:18.551763   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
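
	(Editor's note: before running kubeadm init, minikube checks each well-known kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it; on this first start the files simply do not exist, so every grep exits 2 and the rm calls are no-ops. A compact sketch of that cleanup loop, with cleanStaleKubeconfigs as an assumed name, is below.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes any of the well-known kubeconfig files that
	// do not mention the expected control-plane endpoint, mirroring the
	// grep-then-rm loop in the log. Missing files are left alone.
	func cleanStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // not there yet, e.g. on a first start
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Println("removing stale", f)
				os.Remove(f)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}
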
	I1028 18:13:18.560717   55416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:13:18.878524   55416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:15:17.232419   55416 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:15:17.232575   55416 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:15:17.234059   55416 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:15:17.234103   55416 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:15:17.234191   55416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:15:17.234330   55416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:15:17.234470   55416 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:15:17.234557   55416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:15:17.236435   55416 out.go:235]   - Generating certificates and keys ...
	I1028 18:15:17.236531   55416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:15:17.236601   55416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:15:17.236698   55416 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 18:15:17.236786   55416 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 18:15:17.236871   55416 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 18:15:17.236935   55416 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 18:15:17.236994   55416 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 18:15:17.237103   55416 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-192352 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I1028 18:15:17.237149   55416 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 18:15:17.237277   55416 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-192352 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I1028 18:15:17.237379   55416 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 18:15:17.237467   55416 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 18:15:17.237534   55416 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 18:15:17.237611   55416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:15:17.237683   55416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:15:17.237758   55416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:15:17.237849   55416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:15:17.237937   55416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:15:17.238033   55416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:15:17.238113   55416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:15:17.238148   55416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:15:17.238204   55416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:15:17.239519   55416 out.go:235]   - Booting up control plane ...
	I1028 18:15:17.239627   55416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:15:17.239732   55416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:15:17.239801   55416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:15:17.239876   55416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:15:17.240030   55416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:15:17.240091   55416 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:15:17.240178   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:15:17.240402   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:15:17.240523   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:15:17.240712   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:15:17.240814   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:15:17.241050   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:15:17.241124   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:15:17.241307   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:15:17.241397   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:15:17.241597   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:15:17.241606   55416 kubeadm.go:310] 
	I1028 18:15:17.241657   55416 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:15:17.241696   55416 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:15:17.241707   55416 kubeadm.go:310] 
	I1028 18:15:17.241749   55416 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:15:17.241778   55416 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:15:17.241875   55416 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:15:17.241886   55416 kubeadm.go:310] 
	I1028 18:15:17.242029   55416 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:15:17.242083   55416 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:15:17.242129   55416 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:15:17.242139   55416 kubeadm.go:310] 
	I1028 18:15:17.242298   55416 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:15:17.242409   55416 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:15:17.242417   55416 kubeadm.go:310] 
	I1028 18:15:17.242500   55416 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:15:17.242596   55416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:15:17.242693   55416 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:15:17.242786   55416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:15:17.242799   55416 kubeadm.go:310] 
	W1028 18:15:17.242934   55416 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-192352 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-192352 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:15:17.242979   55416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:15:18.847744   55416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.604744328s)
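
	(Editor's note: after the first kubeadm init times out waiting for the control plane, minikube runs kubeadm reset --force against the CRI socket, clears the stale kubeconfigs again, and retries init once, which is the "will try again" path seen above. The sketch below captures only that retry policy; run and initWithRetry are illustrative names, and the command strings are copied from the log.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(script string) error {
		return exec.Command("/bin/bash", "-c", script).Run()
	}

	// initWithRetry mirrors the log's behaviour: try kubeadm init, and on
	// failure run kubeadm reset --force before one more attempt.
	func initWithRetry(initCmd, resetCmd string) error {
		var err error
		for attempt := 1; attempt <= 2; attempt++ {
			if err = run(initCmd); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v\n", attempt, err)
			if attempt < 2 {
				_ = run(resetCmd) // reset the node state before retrying
			}
		}
		return err
	}

	func main() {
		initCmd := `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml`
		resetCmd := `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`
		if err := initWithRetry(initCmd, resetCmd); err != nil {
			fmt.Println("init ultimately failed:", err)
		}
	}
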
	I1028 18:15:18.847814   55416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:15:18.861762   55416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:15:18.871144   55416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:15:18.871165   55416 kubeadm.go:157] found existing configuration files:
	
	I1028 18:15:18.871205   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:15:18.880127   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:15:18.880175   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:15:18.888954   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:15:18.897476   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:15:18.897521   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:15:18.906166   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:15:18.914536   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:15:18.914578   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:15:18.923643   55416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:15:18.931954   55416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:15:18.932006   55416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:15:18.940848   55416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:15:19.007870   55416 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:15:19.007937   55416 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:15:19.137194   55416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:15:19.137353   55416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:15:19.137488   55416 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:15:19.326455   55416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:15:19.328478   55416 out.go:235]   - Generating certificates and keys ...
	I1028 18:15:19.328582   55416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:15:19.328642   55416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:15:19.328706   55416 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:15:19.328755   55416 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:15:19.328812   55416 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:15:19.328879   55416 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:15:19.328945   55416 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:15:19.329003   55416 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:15:19.329076   55416 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:15:19.329152   55416 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:15:19.329207   55416 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:15:19.329258   55416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:15:19.512061   55416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:15:19.881707   55416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:15:20.030234   55416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:15:20.232461   55416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:15:20.249181   55416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:15:20.249297   55416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:15:20.249333   55416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:15:20.391397   55416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:15:20.392967   55416 out.go:235]   - Booting up control plane ...
	I1028 18:15:20.393083   55416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:15:20.403282   55416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:15:20.406261   55416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:15:20.406391   55416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:15:20.407289   55416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:16:00.405434   55416 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:16:00.405925   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:16:00.406191   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:16:05.406160   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:16:05.406369   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:16:15.406330   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:16:15.406546   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:16:35.407076   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:16:35.407252   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:17:15.409032   55416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:17:15.409373   55416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:17:15.409389   55416 kubeadm.go:310] 
	I1028 18:17:15.409482   55416 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:17:15.409566   55416 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:17:15.409576   55416 kubeadm.go:310] 
	I1028 18:17:15.409633   55416 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:17:15.409687   55416 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:17:15.409823   55416 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:17:15.409833   55416 kubeadm.go:310] 
	I1028 18:17:15.410032   55416 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:17:15.410086   55416 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:17:15.410132   55416 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:17:15.410140   55416 kubeadm.go:310] 
	I1028 18:17:15.410261   55416 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:17:15.410379   55416 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:17:15.410399   55416 kubeadm.go:310] 
	I1028 18:17:15.410524   55416 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:17:15.410630   55416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:17:15.410718   55416 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:17:15.410805   55416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:17:15.410821   55416 kubeadm.go:310] 
	I1028 18:17:15.411581   55416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:17:15.411690   55416 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:17:15.411785   55416 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:17:15.411861   55416 kubeadm.go:394] duration metric: took 3m56.994645147s to StartCluster
	I1028 18:17:15.411906   55416 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:17:15.411970   55416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:17:15.462603   55416 cri.go:89] found id: ""
	I1028 18:17:15.462630   55416 logs.go:282] 0 containers: []
	W1028 18:17:15.462638   55416 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:17:15.462645   55416 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:17:15.462700   55416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:17:15.496362   55416 cri.go:89] found id: ""
	I1028 18:17:15.496392   55416 logs.go:282] 0 containers: []
	W1028 18:17:15.496403   55416 logs.go:284] No container was found matching "etcd"
	I1028 18:17:15.496411   55416 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:17:15.496492   55416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:17:15.534858   55416 cri.go:89] found id: ""
	I1028 18:17:15.534893   55416 logs.go:282] 0 containers: []
	W1028 18:17:15.534905   55416 logs.go:284] No container was found matching "coredns"
	I1028 18:17:15.534914   55416 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:17:15.534971   55416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:17:15.593446   55416 cri.go:89] found id: ""
	I1028 18:17:15.593472   55416 logs.go:282] 0 containers: []
	W1028 18:17:15.593484   55416 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:17:15.593491   55416 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:17:15.593552   55416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:17:15.627910   55416 cri.go:89] found id: ""
	I1028 18:17:15.627936   55416 logs.go:282] 0 containers: []
	W1028 18:17:15.627946   55416 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:17:15.627953   55416 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:17:15.628020   55416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:17:15.664094   55416 cri.go:89] found id: ""
	I1028 18:17:15.664119   55416 logs.go:282] 0 containers: []
	W1028 18:17:15.664130   55416 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:17:15.664137   55416 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:17:15.664205   55416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:17:15.706129   55416 cri.go:89] found id: ""
	I1028 18:17:15.706160   55416 logs.go:282] 0 containers: []
	W1028 18:17:15.706171   55416 logs.go:284] No container was found matching "kindnet"
	I1028 18:17:15.706183   55416 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:17:15.706198   55416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:17:15.809416   55416 logs.go:123] Gathering logs for container status ...
	I1028 18:17:15.809448   55416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:17:15.849225   55416 logs.go:123] Gathering logs for kubelet ...
	I1028 18:17:15.849251   55416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:17:15.900800   55416 logs.go:123] Gathering logs for dmesg ...
	I1028 18:17:15.900830   55416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:17:15.914357   55416 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:17:15.914388   55416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:17:16.041134   55416 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1028 18:17:16.041163   55416 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:17:16.041207   55416 out.go:270] * 
	W1028 18:17:16.041260   55416 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:17:16.041275   55416 out.go:270] * 
	W1028 18:17:16.042046   55416 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:17:16.045147   55416 out.go:201] 
	W1028 18:17:16.046339   55416 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:17:16.046373   55416 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:17:16.046402   55416 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:17:16.048442   55416 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
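For context, a minimal sketch of the workaround suggested in the log above ('journalctl -xeu kubelet' plus the kubelet.cgroup-driver hint); it reuses the profile and flags from this run and is not verified to resolve the v1.20.0 kubelet health-check failure seen here:

	# Inspect why the kubelet never became healthy (run on the node, e.g. via `minikube ssh`)
	journalctl -xeu kubelet | tail -n 100
	# Retry the same start while forcing the kubelet onto the systemd cgroup driver, as the suggestion proposes
	out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd \
	  --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio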
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-192352
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-192352: (6.311642487s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-192352 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-192352 status --format={{.Host}}: exit status 7 (76.418546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.737564537s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-192352 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (87.449263ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-192352] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-192352
	    minikube start -p kubernetes-upgrade-192352 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1923522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-192352 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
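For reference, a hedged sketch of the first recovery path quoted in the suggestion above, followed by the same version check the test itself runs at version_upgrade_test.go:248; the commands are taken from this run's output and are not an endorsement of one option over the others:

	# Suggestion option 1: recreate the cluster at the older Kubernetes version
	minikube delete -p kubernetes-upgrade-192352
	minikube start -p kubernetes-upgrade-192352 --kubernetes-version=v1.20.0
	# Confirm which server version the cluster is actually running
	kubectl --context kubernetes-upgrade-192352 version --output=json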
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1028 18:18:38.394911   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-192352 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.646719601s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-28 18:19:19.030478167 +0000 UTC m=+4398.488522484
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-192352 -n kubernetes-upgrade-192352
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-192352 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-192352 logs -n 25: (1.626358508s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-889327          | force-systemd-flag-889327 | jenkins | v1.34.0 | 28 Oct 24 18:15 UTC | 28 Oct 24 18:16 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-165190             | stopped-upgrade-165190    | jenkins | v1.34.0 | 28 Oct 24 18:15 UTC | 28 Oct 24 18:15 UTC |
	| start   | -p force-systemd-env-806978           | force-systemd-env-806978  | jenkins | v1.34.0 | 28 Oct 24 18:15 UTC | 28 Oct 24 18:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-793119 sudo           | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:15 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-793119                | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:15 UTC | 28 Oct 24 18:15 UTC |
	| start   | -p NoKubernetes-793119                | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:15 UTC | 28 Oct 24 18:16 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-889327 ssh cat     | force-systemd-flag-889327 | jenkins | v1.34.0 | 28 Oct 24 18:16 UTC | 28 Oct 24 18:16 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-889327          | force-systemd-flag-889327 | jenkins | v1.34.0 | 28 Oct 24 18:16 UTC | 28 Oct 24 18:16 UTC |
	| start   | -p running-upgrade-703793             | minikube                  | jenkins | v1.26.0 | 28 Oct 24 18:16 UTC | 28 Oct 24 18:17 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-806978           | force-systemd-env-806978  | jenkins | v1.34.0 | 28 Oct 24 18:16 UTC | 28 Oct 24 18:16 UTC |
	| start   | -p cert-expiration-559364             | cert-expiration-559364    | jenkins | v1.34.0 | 28 Oct 24 18:16 UTC | 28 Oct 24 18:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-793119 sudo           | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:16 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-793119                | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:16 UTC | 28 Oct 24 18:16 UTC |
	| start   | -p cert-options-040988                | cert-options-040988       | jenkins | v1.34.0 | 28 Oct 24 18:16 UTC | 28 Oct 24 18:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-192352          | kubernetes-upgrade-192352 | jenkins | v1.34.0 | 28 Oct 24 18:17 UTC | 28 Oct 24 18:17 UTC |
	| start   | -p kubernetes-upgrade-192352          | kubernetes-upgrade-192352 | jenkins | v1.34.0 | 28 Oct 24 18:17 UTC | 28 Oct 24 18:18 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-703793             | running-upgrade-703793    | jenkins | v1.34.0 | 28 Oct 24 18:17 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-040988 ssh               | cert-options-040988       | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:18 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-040988 -- sudo        | cert-options-040988       | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:18 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-040988                | cert-options-040988       | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:18 UTC |
	| start   | -p old-k8s-version-223868             | old-k8s-version-223868    | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-192352          | kubernetes-upgrade-192352 | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-192352          | kubernetes-upgrade-192352 | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-703793             | running-upgrade-703793    | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p no-preload-051152                  | no-preload-051152         | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:19:16
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
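	For reference, that header format is the standard glog/klog prefix used by every entry below. A small Go sketch (the regular expression and field names are illustrative, not part of minikube) shows how one of these entries splits into its fields:

// glogline.go: split a glog/klog-style line such as the ones below into its fields.
package main

import (
	"fmt"
	"regexp"
)

// Matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I1028 18:19:16.156289   64081 out.go:345] Setting OutFile to fd 1 ..."
	m := glogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a glog-style line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s thread=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
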
	I1028 18:19:16.156289   64081 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:19:16.156402   64081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:19:16.156413   64081 out.go:358] Setting ErrFile to fd 2...
	I1028 18:19:16.156418   64081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:19:16.156632   64081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:19:16.157249   64081 out.go:352] Setting JSON to false
	I1028 18:19:16.158234   64081 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7299,"bootTime":1730132257,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:19:16.158322   64081 start.go:139] virtualization: kvm guest
	I1028 18:19:16.160284   64081 out.go:177] * [no-preload-051152] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:19:16.161719   64081 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:19:16.161731   64081 notify.go:220] Checking for updates...
	I1028 18:19:16.164075   64081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:19:16.165185   64081 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:19:16.166353   64081 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:19:16.167536   64081 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:19:16.168623   64081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:19:16.170066   64081 config.go:182] Loaded profile config "cert-expiration-559364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:19:16.170183   64081 config.go:182] Loaded profile config "kubernetes-upgrade-192352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:19:16.170281   64081 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:19:16.170371   64081 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:19:16.206034   64081 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 18:19:16.207272   64081 start.go:297] selected driver: kvm2
	I1028 18:19:16.207286   64081 start.go:901] validating driver "kvm2" against <nil>
	I1028 18:19:16.207295   64081 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:19:16.207974   64081 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.208043   64081 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:19:16.222266   64081 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:19:16.222309   64081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 18:19:16.222560   64081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:19:16.222594   64081 cni.go:84] Creating CNI manager for ""
	I1028 18:19:16.222647   64081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:19:16.222658   64081 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 18:19:16.222717   64081 start.go:340] cluster config:
	{Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:19:16.222839   64081 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.224540   64081 out.go:177] * Starting "no-preload-051152" primary control-plane node in "no-preload-051152" cluster
	I1028 18:19:16.225709   64081 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:19:16.225797   64081 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:19:16.225820   64081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json: {Name:mk21ddbee76c72b49d9bd6c714fe7eb480fe0654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:19:16.225906   64081 cache.go:107] acquiring lock: {Name:mk665d68af9f3465ff87cc790ff515d5754875cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.225945   64081 cache.go:107] acquiring lock: {Name:mk19f1b990be4d6c17796e45e92c3a01386cc62a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.225963   64081 start.go:360] acquireMachinesLock for no-preload-051152: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:19:16.225987   64081 cache.go:115] /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1028 18:19:16.226003   64081 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.175µs
	I1028 18:19:16.225994   64081 cache.go:107] acquiring lock: {Name:mkd0b3d6ca1728d44541b06820b87f76f474a358 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.226011   64081 start.go:364] duration metric: took 34.284µs to acquireMachinesLock for "no-preload-051152"
	I1028 18:19:16.225952   64081 cache.go:107] acquiring lock: {Name:mk9a2035af3ad657c066b9ab878ba055eece2d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.225996   64081 cache.go:107] acquiring lock: {Name:mkc4c32c3f1e13d334c3eb3101b14e4a2df35749 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.225909   64081 cache.go:107] acquiring lock: {Name:mkd63d66b6a594b062ef133bcc08356b5baba2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.226079   64081 cache.go:107] acquiring lock: {Name:mk644ca6f39244e55d22945aefb28b3ce6f7c113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.226095   64081 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:19:16.226022   64081 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1028 18:19:16.226116   64081 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:19:16.226036   64081 start.go:93] Provisioning new machine with config: &{Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:19:16.226101   64081 cache.go:107] acquiring lock: {Name:mk2f52e3e6dac08d8e15d7507c1597b6d7da2147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:19:16.226159   64081 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 18:19:16.226187   64081 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:19:16.226208   64081 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 18:19:16.226213   64081 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:19:12.782906   63714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:19:13.282365   63714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:19:13.298235   63714 api_server.go:72] duration metric: took 1.016222528s to wait for apiserver process to appear ...
	I1028 18:19:13.298261   63714 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:19:13.298283   63714 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:19:15.578877   63714 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:19:15.578919   63714 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:19:15.578939   63714 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:19:15.625863   63714 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:19:15.625906   63714 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:19:15.799008   63714 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:19:15.808279   63714 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:19:15.808307   63714 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:19:16.298790   63714 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:19:16.312662   63714 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:19:16.312876   63714 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:19:16.798431   63714 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:19:16.809954   63714 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:19:16.809983   63714 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:19:17.298504   63714 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:19:17.303816   63714 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:19:17.313125   63714 api_server.go:141] control plane version: v1.31.2
	I1028 18:19:17.313152   63714 api_server.go:131] duration metric: took 4.014883534s to wait for apiserver health ...
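	The 403 -> 500 -> 200 progression above is the expected apiserver warm-up: anonymous requests to /healthz are rejected until the rbac/bootstrap-roles post-start hook has run, and 500s persist until every hook reports ok. A minimal Go sketch of the same anonymous probe follows; the address is the one from this log, and the loop, timeout, and certificate handling are illustrative rather than minikube's own api_server.go implementation:

// healthzprobe.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver presents a cluster-local CA, so certificate verification is skipped here.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	const url = "https://192.168.50.62:8443/healthz" // address taken from the log above
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // control plane reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
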
	I1028 18:19:17.313162   63714 cni.go:84] Creating CNI manager for ""
	I1028 18:19:17.313170   63714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:19:17.315614   63714 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:19:17.317001   63714 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:19:17.327620   63714 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
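	The file copied above is a bridge CNI configuration list. Its exact 496-byte contents are not reproduced in this log; purely as an illustration of the conflist format (the subnet, plugin set, and output path below are assumptions), a generic bridge configuration could be generated like this:

// writecni.go: write a generic bridge CNI conflist; contents are illustrative only.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written to the current directory; the real file lives under /etc/cni/net.d/.
	if err := os.WriteFile("1-k8s.conflist.example", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote 1-k8s.conflist.example")
}
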
	I1028 18:19:17.349621   63714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:19:17.349683   63714 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 18:19:17.349698   63714 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 18:19:17.359377   63714 system_pods.go:59] 8 kube-system pods found
	I1028 18:19:17.359404   63714 system_pods.go:61] "coredns-7c65d6cfc9-4xrhk" [451d2891-ee58-4fa2-8136-a6ff34b78dcf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:19:17.359411   63714 system_pods.go:61] "coredns-7c65d6cfc9-qszj7" [38422e07-b226-40d8-a78d-e3abac2dc703] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:19:17.359418   63714 system_pods.go:61] "etcd-kubernetes-upgrade-192352" [a19f60f1-6ed9-43aa-902c-d7be4ee0eb17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:19:17.359425   63714 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-192352" [9f6519bb-e207-4d5f-ad53-0dbdd89f4f21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:19:17.359433   63714 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-192352" [12556493-f832-4f53-9df1-c4be69c3ccf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:19:17.359442   63714 system_pods.go:61] "kube-proxy-zgx88" [7b7c508d-1540-4863-b6f6-97bef33c1881] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:19:17.359449   63714 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-192352" [9c740cdb-416c-4dcf-9fa3-23ccacba6a42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:19:17.359458   63714 system_pods.go:61] "storage-provisioner" [710b8e31-d525-4d6f-95c8-5619a208762c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:19:17.359468   63714 system_pods.go:74] duration metric: took 9.832369ms to wait for pod list to return data ...
	I1028 18:19:17.359478   63714 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:19:17.363072   63714 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:19:17.363102   63714 node_conditions.go:123] node cpu capacity is 2
	I1028 18:19:17.363111   63714 node_conditions.go:105] duration metric: took 3.628418ms to run NodePressure ...
	I1028 18:19:17.363125   63714 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:19:17.684146   63714 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:19:17.700279   63714 ops.go:34] apiserver oom_adj: -16
	I1028 18:19:17.700304   63714 kubeadm.go:597] duration metric: took 29.108868337s to restartPrimaryControlPlane
	I1028 18:19:17.700316   63714 kubeadm.go:394] duration metric: took 29.289057068s to StartCluster
	I1028 18:19:17.700336   63714 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:19:17.700420   63714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:19:17.701650   63714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:19:17.701910   63714 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:19:17.701949   63714 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:19:17.702048   63714 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-192352"
	I1028 18:19:17.702064   63714 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-192352"
	W1028 18:19:17.702073   63714 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:19:17.702103   63714 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-192352"
	I1028 18:19:17.702106   63714 host.go:66] Checking if "kubernetes-upgrade-192352" exists ...
	I1028 18:19:17.702114   63714 config.go:182] Loaded profile config "kubernetes-upgrade-192352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:19:17.702119   63714 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-192352"
	I1028 18:19:17.702534   63714 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:19:17.702575   63714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:19:17.702599   63714 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:19:17.702636   63714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:19:17.703557   63714 out.go:177] * Verifying Kubernetes components...
	I1028 18:19:17.704802   63714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:19:17.722878   63714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I1028 18:19:17.723940   63714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41993
	I1028 18:19:17.724274   63714 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:19:17.724794   63714 main.go:141] libmachine: Using API Version  1
	I1028 18:19:17.724821   63714 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:19:17.725307   63714 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:19:17.725392   63714 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:19:17.725692   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetState
	I1028 18:19:17.726133   63714 main.go:141] libmachine: Using API Version  1
	I1028 18:19:17.726156   63714 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:19:17.726564   63714 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:19:17.727171   63714 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:19:17.727224   63714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:19:17.729575   63714 kapi.go:59] client config for kubernetes-upgrade-192352: &rest.Config{Host:"https://192.168.50.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.crt", KeyFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kubernetes-upgrade-192352/client.key", CAFile:"/home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 18:19:17.729789   63714 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-192352"
	W1028 18:19:17.729797   63714 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:19:17.729817   63714 host.go:66] Checking if "kubernetes-upgrade-192352" exists ...
	I1028 18:19:17.730126   63714 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:19:17.730192   63714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:19:17.756759   63714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I1028 18:19:17.756993   63714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44675
	I1028 18:19:17.757442   63714 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:19:17.757906   63714 main.go:141] libmachine: Using API Version  1
	I1028 18:19:17.757930   63714 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:19:17.758280   63714 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:19:17.758830   63714 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:19:17.758871   63714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:19:17.759893   63714 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:19:17.760312   63714 main.go:141] libmachine: Using API Version  1
	I1028 18:19:17.760346   63714 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:19:17.760695   63714 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:19:17.760857   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetState
	I1028 18:19:17.763748   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:19:17.768531   63714 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:19:17.769818   63714 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:19:17.769836   63714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:19:17.769857   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:19:17.774926   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:19:17.775496   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:17:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:19:17.775516   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:19:17.775653   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:19:17.775801   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:19:17.775941   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:19:17.776116   63714 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa Username:docker}
	I1028 18:19:17.779845   63714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33169
	I1028 18:19:17.780292   63714 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:19:17.780720   63714 main.go:141] libmachine: Using API Version  1
	I1028 18:19:17.780737   63714 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:19:17.781099   63714 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:19:17.781282   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetState
	I1028 18:19:17.782858   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .DriverName
	I1028 18:19:17.783048   63714 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:19:17.783066   63714 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:19:17.783081   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHHostname
	I1028 18:19:17.785912   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:19:17.786265   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:b0:c5", ip: ""} in network mk-kubernetes-upgrade-192352: {Iface:virbr2 ExpiryTime:2024-10-28 19:17:59 +0000 UTC Type:0 Mac:52:54:00:35:b0:c5 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-192352 Clientid:01:52:54:00:35:b0:c5}
	I1028 18:19:17.786312   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | domain kubernetes-upgrade-192352 has defined IP address 192.168.50.62 and MAC address 52:54:00:35:b0:c5 in network mk-kubernetes-upgrade-192352
	I1028 18:19:17.786553   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHPort
	I1028 18:19:17.786815   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHKeyPath
	I1028 18:19:17.787027   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .GetSSHUsername
	I1028 18:19:17.787255   63714 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kubernetes-upgrade-192352/id_rsa Username:docker}
	I1028 18:19:18.024520   63714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:19:18.048812   63714 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:19:18.048913   63714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:19:18.073728   63714 api_server.go:72] duration metric: took 371.779846ms to wait for apiserver process to appear ...
	I1028 18:19:18.073756   63714 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:19:18.073779   63714 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:19:18.081748   63714 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:19:18.082773   63714 api_server.go:141] control plane version: v1.31.2
	I1028 18:19:18.082801   63714 api_server.go:131] duration metric: took 9.036813ms to wait for apiserver health ...
	I1028 18:19:18.082811   63714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:19:18.089835   63714 system_pods.go:59] 8 kube-system pods found
	I1028 18:19:18.089868   63714 system_pods.go:61] "coredns-7c65d6cfc9-4xrhk" [451d2891-ee58-4fa2-8136-a6ff34b78dcf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:19:18.089881   63714 system_pods.go:61] "coredns-7c65d6cfc9-qszj7" [38422e07-b226-40d8-a78d-e3abac2dc703] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:19:18.089892   63714 system_pods.go:61] "etcd-kubernetes-upgrade-192352" [a19f60f1-6ed9-43aa-902c-d7be4ee0eb17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:19:18.089905   63714 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-192352" [9f6519bb-e207-4d5f-ad53-0dbdd89f4f21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:19:18.089921   63714 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-192352" [12556493-f832-4f53-9df1-c4be69c3ccf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:19:18.089930   63714 system_pods.go:61] "kube-proxy-zgx88" [7b7c508d-1540-4863-b6f6-97bef33c1881] Running
	I1028 18:19:18.089938   63714 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-192352" [9c740cdb-416c-4dcf-9fa3-23ccacba6a42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:19:18.089945   63714 system_pods.go:61] "storage-provisioner" [710b8e31-d525-4d6f-95c8-5619a208762c] Running
	I1028 18:19:18.089953   63714 system_pods.go:74] duration metric: took 7.135692ms to wait for pod list to return data ...
	I1028 18:19:18.089968   63714 kubeadm.go:582] duration metric: took 388.025266ms to wait for: map[apiserver:true system_pods:true]
	I1028 18:19:18.089984   63714 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:19:18.094877   63714 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:19:18.094900   63714 node_conditions.go:123] node cpu capacity is 2
	I1028 18:19:18.094911   63714 node_conditions.go:105] duration metric: took 4.922125ms to run NodePressure ...
	I1028 18:19:18.094924   63714 start.go:241] waiting for startup goroutines ...
	I1028 18:19:18.208487   63714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:19:18.223513   63714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:19:18.953449   63714 main.go:141] libmachine: Making call to close driver server
	I1028 18:19:18.953481   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .Close
	I1028 18:19:18.953509   63714 main.go:141] libmachine: Making call to close driver server
	I1028 18:19:18.953527   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .Close
	I1028 18:19:18.953782   63714 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:19:18.953801   63714 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:19:18.953811   63714 main.go:141] libmachine: Making call to close driver server
	I1028 18:19:18.953825   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .Close
	I1028 18:19:18.953919   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Closing plugin on server side
	I1028 18:19:18.953946   63714 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:19:18.953958   63714 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:19:18.953979   63714 main.go:141] libmachine: Making call to close driver server
	I1028 18:19:18.954009   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .Close
	I1028 18:19:18.954039   63714 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:19:18.954051   63714 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:19:18.955503   63714 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:19:18.955518   63714 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:19:18.955579   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Closing plugin on server side
	I1028 18:19:18.960855   63714 main.go:141] libmachine: Making call to close driver server
	I1028 18:19:18.960876   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) Calling .Close
	I1028 18:19:18.961144   63714 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:19:18.961161   63714 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:19:18.961178   63714 main.go:141] libmachine: (kubernetes-upgrade-192352) DBG | Closing plugin on server side
	I1028 18:19:18.962879   63714 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 18:19:18.964098   63714 addons.go:510] duration metric: took 1.262155023s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 18:19:18.964134   63714 start.go:246] waiting for cluster config update ...
	I1028 18:19:18.964145   63714 start.go:255] writing updated cluster config ...
	I1028 18:19:18.964347   63714 ssh_runner.go:195] Run: rm -f paused
	I1028 18:19:19.013122   63714 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:19:19.014817   63714 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-192352" cluster and "default" namespace by default
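	The "==> CRI-O <==" block that follows is the container-runtime log that minikube's log collector gathers from the kubernetes-upgrade-192352 guest. As a rough sketch, assuming the profile is still running and crictl is available inside the guest, the same container listing and the full log bundle could be pulled manually:

		# list every CRI-O container on the node, running and exited
		minikube ssh -p kubernetes-upgrade-192352 -- sudo crictl ps -a

		# write the complete minikube log bundle (including the CRI-O section) to a file
		minikube -p kubernetes-upgrade-192352 logs --file=./kubernetes-upgrade-192352.log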
	
	
	==> CRI-O <==
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.740388247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139559740362470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db19ca40-4f8c-4714-8060-fbaf1b3323b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.741054404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e2961bf-6774-49c3-8867-8ecccd13b5e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.741111919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e2961bf-6774-49c3-8867-8ecccd13b5e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.741427847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7d646c17a2e983a2f75a3c7792e0fcf6597790d9a3fbc4a3e4bb3b8c4cf55d,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556581844171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162047514defcb7fc91919842b49fd7866e75585da65af29f3fa41da3ee4ed6b,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556569841207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4xrhk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad5f1ad47de19ae515561a3c80f2da92af58909f66acff2495a2e188f225e745,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAIN
ER_RUNNING,CreatedAt:1730139556555572260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e00d0f3473b23e1f6ee192c289d60cc20f6de636e3f486e17c2e4d5d2a942a,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:173
0139556608097204,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f288182a5a18545d6465b2099d5dde0ed8d0c07be01ecc0740791b8a09372875,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139552778484706,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81dee6b97da1d32130a2ffc775ba19bfdb7dda5d4b4297b8f424e6d06c6f7108,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139552757742294,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d578ac76c4bb835b0f7411bf18637851a08cd34e98268f686e921f749985986c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139552746104572,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99d198ad856e3819928ddb1cb20b4606516f82bafbdd6ad42096bb96758d461,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173013955038087669
9,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60febf5908ac683060d67c83474ec36100c6712bd0966f6ed9b1540ffc235c85,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730139527522287111,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be3a1f54338190792ca8641aea4427b2cc99bade57b4d0ce24d66f738109622,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139527536995170,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394d97b5c653c3070cd03c9c81823d6930434141932db816510da11a7528d21,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139527606446849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.nam
e: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299194ba62a3a96c8d8f72aa3727f36c7f29922ff2d23342e6ef15a7970daae7,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139527517737359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserve
r-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48629f709679bf7de6b3ef442e0f6f0d52f6e4e7bb5b4c1970895fe4b69e214c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139527427949228,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f02ca68915b1390e1af577bc685920670e0f164af679177bf5037afb0a646,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139527369094011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name
: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee235664e090ce911bda301e3c90af94bb73ad7f38aee878b1123aa1558f569e,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527149770887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-4xrhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760687b0ef4da81b223d9ca64b9161766afca6592f7eb5bd2e9c6a09cb2ec6b,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527101812401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e2961bf-6774-49c3-8867-8ecccd13b5e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.792774611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ebcd90d6-a37d-4d47-b0a7-07baa1a5c4f0 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.792880673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebcd90d6-a37d-4d47-b0a7-07baa1a5c4f0 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.793915119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b0587fa-f9ef-4bc2-9487-e8f44bb6ba0d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.794277607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139559794255694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b0587fa-f9ef-4bc2-9487-e8f44bb6ba0d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.794860918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3cc4eb1-74ba-4bb0-a372-bdce1045ab4b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.794921552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3cc4eb1-74ba-4bb0-a372-bdce1045ab4b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.795264233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7d646c17a2e983a2f75a3c7792e0fcf6597790d9a3fbc4a3e4bb3b8c4cf55d,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556581844171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162047514defcb7fc91919842b49fd7866e75585da65af29f3fa41da3ee4ed6b,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556569841207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4xrhk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad5f1ad47de19ae515561a3c80f2da92af58909f66acff2495a2e188f225e745,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAIN
ER_RUNNING,CreatedAt:1730139556555572260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e00d0f3473b23e1f6ee192c289d60cc20f6de636e3f486e17c2e4d5d2a942a,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:173
0139556608097204,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f288182a5a18545d6465b2099d5dde0ed8d0c07be01ecc0740791b8a09372875,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139552778484706,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81dee6b97da1d32130a2ffc775ba19bfdb7dda5d4b4297b8f424e6d06c6f7108,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139552757742294,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d578ac76c4bb835b0f7411bf18637851a08cd34e98268f686e921f749985986c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139552746104572,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99d198ad856e3819928ddb1cb20b4606516f82bafbdd6ad42096bb96758d461,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173013955038087669
9,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60febf5908ac683060d67c83474ec36100c6712bd0966f6ed9b1540ffc235c85,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730139527522287111,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be3a1f54338190792ca8641aea4427b2cc99bade57b4d0ce24d66f738109622,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139527536995170,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394d97b5c653c3070cd03c9c81823d6930434141932db816510da11a7528d21,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139527606446849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.nam
e: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299194ba62a3a96c8d8f72aa3727f36c7f29922ff2d23342e6ef15a7970daae7,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139527517737359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserve
r-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48629f709679bf7de6b3ef442e0f6f0d52f6e4e7bb5b4c1970895fe4b69e214c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139527427949228,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f02ca68915b1390e1af577bc685920670e0f164af679177bf5037afb0a646,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139527369094011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name
: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee235664e090ce911bda301e3c90af94bb73ad7f38aee878b1123aa1558f569e,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527149770887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-4xrhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760687b0ef4da81b223d9ca64b9161766afca6592f7eb5bd2e9c6a09cb2ec6b,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527101812401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3cc4eb1-74ba-4bb0-a372-bdce1045ab4b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.856192794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d49ddcf-7427-4022-aa4d-e5db7501d952 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.857207146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d49ddcf-7427-4022-aa4d-e5db7501d952 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.858884480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19be4596-5d4d-41a9-832a-25cbabcd318e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.859986490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139559859961133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19be4596-5d4d-41a9-832a-25cbabcd318e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.861401741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d800803-9e4f-4485-9877-b3cd3216f6e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.861504282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d800803-9e4f-4485-9877-b3cd3216f6e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.862045720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7d646c17a2e983a2f75a3c7792e0fcf6597790d9a3fbc4a3e4bb3b8c4cf55d,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556581844171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162047514defcb7fc91919842b49fd7866e75585da65af29f3fa41da3ee4ed6b,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556569841207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4xrhk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad5f1ad47de19ae515561a3c80f2da92af58909f66acff2495a2e188f225e745,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAIN
ER_RUNNING,CreatedAt:1730139556555572260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e00d0f3473b23e1f6ee192c289d60cc20f6de636e3f486e17c2e4d5d2a942a,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:173
0139556608097204,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f288182a5a18545d6465b2099d5dde0ed8d0c07be01ecc0740791b8a09372875,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139552778484706,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81dee6b97da1d32130a2ffc775ba19bfdb7dda5d4b4297b8f424e6d06c6f7108,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139552757742294,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d578ac76c4bb835b0f7411bf18637851a08cd34e98268f686e921f749985986c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139552746104572,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99d198ad856e3819928ddb1cb20b4606516f82bafbdd6ad42096bb96758d461,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173013955038087669
9,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60febf5908ac683060d67c83474ec36100c6712bd0966f6ed9b1540ffc235c85,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730139527522287111,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be3a1f54338190792ca8641aea4427b2cc99bade57b4d0ce24d66f738109622,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139527536995170,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394d97b5c653c3070cd03c9c81823d6930434141932db816510da11a7528d21,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139527606446849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.nam
e: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299194ba62a3a96c8d8f72aa3727f36c7f29922ff2d23342e6ef15a7970daae7,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139527517737359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserve
r-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48629f709679bf7de6b3ef442e0f6f0d52f6e4e7bb5b4c1970895fe4b69e214c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139527427949228,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f02ca68915b1390e1af577bc685920670e0f164af679177bf5037afb0a646,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139527369094011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name
: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee235664e090ce911bda301e3c90af94bb73ad7f38aee878b1123aa1558f569e,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527149770887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-4xrhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760687b0ef4da81b223d9ca64b9161766afca6592f7eb5bd2e9c6a09cb2ec6b,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527101812401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d800803-9e4f-4485-9877-b3cd3216f6e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.900271216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e39fae1b-6f1a-45d6-8a21-8eedcb77a2b5 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.900343532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e39fae1b-6f1a-45d6-8a21-8eedcb77a2b5 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.901467090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc09805d-20bc-4d43-b73c-8c57e1ea88de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.901972997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139559901945500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc09805d-20bc-4d43-b73c-8c57e1ea88de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.902476380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db32e6b3-7e99-40dc-b066-f431e317a047 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.902531216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db32e6b3-7e99-40dc-b066-f431e317a047 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:19:19 kubernetes-upgrade-192352 crio[2295]: time="2024-10-28 18:19:19.902978795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7d646c17a2e983a2f75a3c7792e0fcf6597790d9a3fbc4a3e4bb3b8c4cf55d,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556581844171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162047514defcb7fc91919842b49fd7866e75585da65af29f3fa41da3ee4ed6b,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139556569841207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4xrhk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad5f1ad47de19ae515561a3c80f2da92af58909f66acff2495a2e188f225e745,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAIN
ER_RUNNING,CreatedAt:1730139556555572260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e00d0f3473b23e1f6ee192c289d60cc20f6de636e3f486e17c2e4d5d2a942a,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:173
0139556608097204,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f288182a5a18545d6465b2099d5dde0ed8d0c07be01ecc0740791b8a09372875,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139552778484706,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81dee6b97da1d32130a2ffc775ba19bfdb7dda5d4b4297b8f424e6d06c6f7108,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139552757742294,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d578ac76c4bb835b0f7411bf18637851a08cd34e98268f686e921f749985986c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139552746104572,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99d198ad856e3819928ddb1cb20b4606516f82bafbdd6ad42096bb96758d461,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173013955038087669
9,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60febf5908ac683060d67c83474ec36100c6712bd0966f6ed9b1540ffc235c85,PodSandboxId:e43d5bf0064ad04e4f835697ab76aebfdf9ed3fafc463ea3f51ef82926ad3ce9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730139527522287111,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710b8e31-d525-4d6f-95c8-5619a208762c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be3a1f54338190792ca8641aea4427b2cc99bade57b4d0ce24d66f738109622,PodSandboxId:388940e89f5710e7bfca9f624b70ebef6ac04b2f24451f6bcb8619364c7f95c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139527536995170,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgx88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7c508d-1540-4863-b6f6-97bef33c1881,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394d97b5c653c3070cd03c9c81823d6930434141932db816510da11a7528d21,PodSandboxId:cd5a6765c28d1ce2990d5689bf794d0bfe82ba33237f36fe608055698ee65417,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139527606446849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.nam
e: etcd-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7858a5d641bdfbc42429e996d1bb8139,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299194ba62a3a96c8d8f72aa3727f36c7f29922ff2d23342e6ef15a7970daae7,PodSandboxId:9abbae2ca5022f359b02235a07c37a29dcc50695740661dbe9519980358b97c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139527517737359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserve
r-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29e2019207273112cb05d669447e999,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48629f709679bf7de6b3ef442e0f6f0d52f6e4e7bb5b4c1970895fe4b69e214c,PodSandboxId:0e47c139078bb3da37e9fbd150e194ffdd29584f39e808472eb36796c5703e6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139527427949228,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a64698ecaa6da2495b33aaaa9b36e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f02ca68915b1390e1af577bc685920670e0f164af679177bf5037afb0a646,PodSandboxId:7ce436fa285283c4536ee139580e70a8be837a87cc1261ca11a2ccae2e23eebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139527369094011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name
: kube-scheduler-kubernetes-upgrade-192352,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845842a1f02a9a7d242972d4a955aaa3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee235664e090ce911bda301e3c90af94bb73ad7f38aee878b1123aa1558f569e,PodSandboxId:9475d3332108b4b7c5097dee3e13d94b2d9c1e9a0a885e00cb034e998c8377ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527149770887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-4xrhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 451d2891-ee58-4fa2-8136-a6ff34b78dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760687b0ef4da81b223d9ca64b9161766afca6592f7eb5bd2e9c6a09cb2ec6b,PodSandboxId:4eac794e829878b3acff311ec7f1b5113c13752b48508431dd1fb60e183ae0e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139527101812401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qszj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38422e07-b226-40d8-a78d-e3abac2dc703,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db32e6b3-7e99-40dc-b066-f431e317a047 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2e00d0f3473b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   e43d5bf0064ad       storage-provisioner
	6a7d646c17a2e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   4eac794e82987       coredns-7c65d6cfc9-qszj7
	162047514defc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   9475d3332108b       coredns-7c65d6cfc9-4xrhk
	ad5f1ad47de19       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   3 seconds ago       Running             kube-proxy                2                   388940e89f571       kube-proxy-zgx88
	f288182a5a185       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   7 seconds ago       Running             kube-apiserver            2                   9abbae2ca5022       kube-apiserver-kubernetes-upgrade-192352
	81dee6b97da1d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   7 seconds ago       Running             kube-scheduler            2                   7ce436fa28528       kube-scheduler-kubernetes-upgrade-192352
	d578ac76c4bb8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   7 seconds ago       Running             kube-controller-manager   2                   0e47c139078bb       kube-controller-manager-kubernetes-upgrade-192352
	f99d198ad856e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 seconds ago       Running             etcd                      2                   cd5a6765c28d1       etcd-kubernetes-upgrade-192352
	c394d97b5c653       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   32 seconds ago      Exited              etcd                      1                   cd5a6765c28d1       etcd-kubernetes-upgrade-192352
	4be3a1f543381       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   32 seconds ago      Exited              kube-proxy                1                   388940e89f571       kube-proxy-zgx88
	60febf5908ac6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   32 seconds ago      Exited              storage-provisioner       1                   e43d5bf0064ad       storage-provisioner
	299194ba62a3a       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   32 seconds ago      Exited              kube-apiserver            1                   9abbae2ca5022       kube-apiserver-kubernetes-upgrade-192352
	48629f709679b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   32 seconds ago      Exited              kube-controller-manager   1                   0e47c139078bb       kube-controller-manager-kubernetes-upgrade-192352
	eb6f02ca68915       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   32 seconds ago      Exited              kube-scheduler            1                   7ce436fa28528       kube-scheduler-kubernetes-upgrade-192352
	ee235664e090c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   32 seconds ago      Exited              coredns                   1                   9475d3332108b       coredns-7c65d6cfc9-4xrhk
	5760687b0ef4d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   32 seconds ago      Exited              coredns                   1                   4eac794e82987       coredns-7c65d6cfc9-qszj7
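
	The container status table above is assembled from the same data returned by the /runtime.v1.RuntimeService/ListContainers calls logged by crio earlier in this section. As a point of reference only, a minimal Go sketch that issues that RPC directly against the CRI-O socket could look like the following (the socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings are assumptions here; this is not part of the test harness):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (assumed path; crictl talks to the same endpoint).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC as the "/runtime.v1.RuntimeService/ListContainers" entries above:
		// an empty filter returns every container, running or exited.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}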
	
	
	==> coredns [162047514defcb7fc91919842b49fd7866e75585da65af29f3fa41da3ee4ed6b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [5760687b0ef4da81b223d9ca64b9161766afca6592f7eb5bd2e9c6a09cb2ec6b] <==
	Trace[662303347]: [10.001520789s] [10.001520789s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1538018451]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 18:18:48.245) (total time: 10001ms):
	Trace[1538018451]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:18:58.246)
	Trace[1538018451]: [10.001004961s] [10.001004961s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[45819245]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 18:18:48.770) (total time: 10000ms):
	Trace[45819245]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:18:58.771)
	Trace[45819245]: [10.000750822s] [10.000750822s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
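
	The errors above are the coredns kubernetes plugin's reflector failing to list Namespaces, Services and EndpointSlices through the kubernetes Service VIP (10.96.0.1:443) while the control plane was being restarted; the run ends with the SIGTERM/lameduck shutdown when the pod itself is restarted, and the attempt-2 container above then starts cleanly. A hedged Go probe of the same endpoint, useful for observing the window in which the VIP is unreachable, could look like this (the in-cluster CA/token paths are the usual kubelet-mounted defaults and are assumptions here, not something the test itself runs):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Assumed in-cluster defaults mounted by the kubelet.
		caPEM, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		token, _ := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")

		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}

		// Poll the same VIP coredns was failing against until /readyz answers.
		for {
			req, _ := http.NewRequest("GET", "https://10.96.0.1:443/readyz", nil)
			req.Header.Set("Authorization", "Bearer "+string(token))
			resp, err := client.Do(req)
			if err != nil {
				fmt.Println("apiserver not reachable yet:", err)
				time.Sleep(2 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Println(resp.Status, string(body))
			return
		}
	}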
	
	
	==> coredns [6a7d646c17a2e983a2f75a3c7792e0fcf6597790d9a3fbc4a3e4bb3b8c4cf55d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ee235664e090ce911bda301e3c90af94bb73ad7f38aee878b1123aa1558f569e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1341923466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 18:18:48.373) (total time: 10007ms):
	Trace[1341923466]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10007ms (18:18:58.381)
	Trace[1341923466]: [10.007145304s] [10.007145304s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1314965811]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 18:18:48.541) (total time: 10002ms):
	Trace[1314965811]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:18:58.543)
	Trace[1314965811]: [10.002186485s] [10.002186485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1793449447]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 18:18:48.709) (total time: 10000ms):
	Trace[1793449447]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:18:58.710)
	Trace[1793449447]: [10.000871193s] [10.000871193s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-192352
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-192352
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-192352
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:19:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:19:15 +0000   Mon, 28 Oct 2024 18:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:19:15 +0000   Mon, 28 Oct 2024 18:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:19:15 +0000   Mon, 28 Oct 2024 18:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:19:15 +0000   Mon, 28 Oct 2024 18:18:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.62
	  Hostname:    kubernetes-upgrade-192352
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5123a374816c443c96db6c2f9828d013
	  System UUID:                5123a374-816c-443c-96db-6c2f9828d013
	  Boot ID:                    af796ec6-d0ab-4bd2-9b1c-6ccc16b6d5e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4xrhk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     51s
	  kube-system                 coredns-7c65d6cfc9-qszj7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     51s
	  kube-system                 etcd-kubernetes-upgrade-192352                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         53s
	  kube-system                 kube-apiserver-kubernetes-upgrade-192352             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-192352    200m (10%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-zgx88                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-kubernetes-upgrade-192352             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-192352 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-192352 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 64s)  kubelet          Node kubernetes-upgrade-192352 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           52s                node-controller  Node kubernetes-upgrade-192352 event: Registered Node kubernetes-upgrade-192352 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-192352 event: Registered Node kubernetes-upgrade-192352 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 18:18] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.057852] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061973] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.188217] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.111639] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.299106] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +4.679391] systemd-fstab-generator[724]: Ignoring "noauto" option for root device
	[  +0.102913] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.614149] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[ +10.119846] systemd-fstab-generator[1246]: Ignoring "noauto" option for root device
	[  +0.098003] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.319857] kauditd_printk_skb: 103 callbacks suppressed
	[ +11.018055] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.136196] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +0.170180] systemd-fstab-generator[2245]: Ignoring "noauto" option for root device
	[  +0.141014] systemd-fstab-generator[2257]: Ignoring "noauto" option for root device
	[  +0.305775] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +3.905107] systemd-fstab-generator[2942]: Ignoring "noauto" option for root device
	[  +0.467615] kauditd_printk_skb: 208 callbacks suppressed
	[ +11.916600] kauditd_printk_skb: 13 callbacks suppressed
	[Oct28 18:19] systemd-fstab-generator[3548]: Ignoring "noauto" option for root device
	[  +5.315237] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.536431] systemd-fstab-generator[4075]: Ignoring "noauto" option for root device
	
	
	==> etcd [c394d97b5c653c3070cd03c9c81823d6930434141932db816510da11a7528d21] <==
	{"level":"info","ts":"2024-10-28T18:18:48.402550Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-28T18:18:48.433843Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","commit-index":390}
	{"level":"info","ts":"2024-10-28T18:18:48.453374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-28T18:18:48.453548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became follower at term 2"}
	{"level":"info","ts":"2024-10-28T18:18:48.453564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 48d332b29d0cdf97 [peers: [], term: 2, commit: 390, applied: 0, lastindex: 390, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-28T18:18:48.459334Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-28T18:18:48.522887Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":382}
	{"level":"info","ts":"2024-10-28T18:18:48.581745Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-28T18:18:48.590377Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"48d332b29d0cdf97","timeout":"7s"}
	{"level":"info","ts":"2024-10-28T18:18:48.590689Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"48d332b29d0cdf97"}
	{"level":"info","ts":"2024-10-28T18:18:48.590748Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"48d332b29d0cdf97","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-28T18:18:48.592573Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-28T18:18:48.592773Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-28T18:18:48.592828Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-28T18:18:48.592838Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-28T18:18:48.601900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 switched to configuration voters=(5247593733537193879)"}
	{"level":"info","ts":"2024-10-28T18:18:48.601976Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","added-peer-id":"48d332b29d0cdf97","added-peer-peer-urls":["https://192.168.50.62:2380"]}
	{"level":"info","ts":"2024-10-28T18:18:48.602071Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:18:48.602116Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:18:48.623611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:18:48.635088Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T18:18:48.640743Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-10-28T18:18:48.640974Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-10-28T18:18:48.652805Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T18:18:48.652745Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"48d332b29d0cdf97","initial-advertise-peer-urls":["https://192.168.50.62:2380"],"listen-peer-urls":["https://192.168.50.62:2380"],"advertise-client-urls":["https://192.168.50.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> etcd [f99d198ad856e3819928ddb1cb20b4606516f82bafbdd6ad42096bb96758d461] <==
	{"level":"info","ts":"2024-10-28T18:19:13.212887Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","added-peer-id":"48d332b29d0cdf97","added-peer-peer-urls":["https://192.168.50.62:2380"]}
	{"level":"info","ts":"2024-10-28T18:19:13.213440Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:19:13.215715Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:19:13.227900Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:19:13.236574Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T18:19:13.238666Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-10-28T18:19:13.238810Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-10-28T18:19:13.239789Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"48d332b29d0cdf97","initial-advertise-peer-urls":["https://192.168.50.62:2380"],"listen-peer-urls":["https://192.168.50.62:2380"],"advertise-client-urls":["https://192.168.50.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T18:19:13.241678Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T18:19:14.207764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-28T18:19:14.207865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T18:19:14.207899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 received MsgPreVoteResp from 48d332b29d0cdf97 at term 2"}
	{"level":"info","ts":"2024-10-28T18:19:14.207928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T18:19:14.207952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 received MsgVoteResp from 48d332b29d0cdf97 at term 3"}
	{"level":"info","ts":"2024-10-28T18:19:14.207995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became leader at term 3"}
	{"level":"info","ts":"2024-10-28T18:19:14.208028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 48d332b29d0cdf97 elected leader 48d332b29d0cdf97 at term 3"}
	{"level":"info","ts":"2024-10-28T18:19:14.212351Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"48d332b29d0cdf97","local-member-attributes":"{Name:kubernetes-upgrade-192352 ClientURLs:[https://192.168.50.62:2379]}","request-path":"/0/members/48d332b29d0cdf97/attributes","cluster-id":"4f4301e400b1ef13","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T18:19:14.212456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:19:14.213038Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:19:14.213803Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:19:14.214885Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T18:19:14.217517Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:19:14.222798Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.62:2379"}
	{"level":"info","ts":"2024-10-28T18:19:14.225489Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:19:14.225533Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:19:20 up 1 min,  0 users,  load average: 0.67, 0.24, 0.09
	Linux kubernetes-upgrade-192352 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [299194ba62a3a96c8d8f72aa3727f36c7f29922ff2d23342e6ef15a7970daae7] <==
	I1028 18:18:48.108023       1 options.go:228] external host was not specified, using 192.168.50.62
	I1028 18:18:48.113421       1 server.go:142] Version: v1.31.2
	I1028 18:18:48.113611       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1028 18:18:48.827361       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:48.827798       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1028 18:18:48.827906       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1028 18:18:48.830075       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 18:18:48.843003       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1028 18:18:48.843045       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1028 18:18:48.843254       1 instance.go:232] Using reconciler: lease
	W1028 18:18:48.850995       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:49.828446       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:49.828493       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:49.851764       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:51.176489       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:51.266691       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:51.625347       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:53.688541       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:53.891066       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:54.194346       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:57.402130       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:57.889159       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:18:58.175877       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f288182a5a18545d6465b2099d5dde0ed8d0c07be01ecc0740791b8a09372875] <==
	I1028 18:19:15.597574       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 18:19:15.597778       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 18:19:15.598990       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1028 18:19:15.601237       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 18:19:15.601284       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 18:19:15.601377       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 18:19:15.601590       1 aggregator.go:171] initial CRD sync complete...
	I1028 18:19:15.601694       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 18:19:15.601720       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 18:19:15.601741       1 cache.go:39] Caches are synced for autoregister controller
	I1028 18:19:15.602841       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 18:19:15.606765       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E1028 18:19:15.616824       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1028 18:19:15.658085       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 18:19:15.683717       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 18:19:15.683820       1 policy_source.go:224] refreshing policies
	I1028 18:19:15.695244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 18:19:16.503010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 18:19:16.877808       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 18:19:17.482057       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 18:19:17.501332       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 18:19:17.555986       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 18:19:17.639170       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 18:19:17.651799       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 18:19:19.134917       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [48629f709679bf7de6b3ef442e0f6f0d52f6e4e7bb5b4c1970895fe4b69e214c] <==
	I1028 18:18:48.937047       1 serving.go:386] Generated self-signed cert in-memory
	I1028 18:18:49.415851       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1028 18:18:49.415929       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:18:49.417400       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1028 18:18:49.418052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 18:18:49.418171       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1028 18:18:49.418172       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [d578ac76c4bb835b0f7411bf18637851a08cd34e98268f686e921f749985986c] <==
	I1028 18:19:19.104498       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 18:19:19.118198       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 18:19:19.118306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-192352"
	I1028 18:19:19.122282       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 18:19:19.124960       1 shared_informer.go:320] Caches are synced for PVC protection
	I1028 18:19:19.127197       1 shared_informer.go:320] Caches are synced for taint
	I1028 18:19:19.127272       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1028 18:19:19.127441       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1028 18:19:19.127544       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 18:19:19.127696       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-192352"
	I1028 18:19:19.127734       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 18:19:19.129865       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 18:19:19.135869       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 18:19:19.140188       1 shared_informer.go:320] Caches are synced for job
	I1028 18:19:19.158461       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1028 18:19:19.165713       1 shared_informer.go:320] Caches are synced for endpoint
	I1028 18:19:19.172470       1 shared_informer.go:320] Caches are synced for ephemeral
	I1028 18:19:19.172990       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 18:19:19.174837       1 shared_informer.go:320] Caches are synced for deployment
	I1028 18:19:19.176065       1 shared_informer.go:320] Caches are synced for GC
	I1028 18:19:19.181330       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1028 18:19:19.181961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="158.944µs"
	I1028 18:19:19.572214       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 18:19:19.587856       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 18:19:19.587898       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [4be3a1f54338190792ca8641aea4427b2cc99bade57b4d0ce24d66f738109622] <==
	
	
	==> kube-proxy [ad5f1ad47de19ae515561a3c80f2da92af58909f66acff2495a2e188f225e745] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:19:17.053782       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:19:17.090137       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.62"]
	E1028 18:19:17.091505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:19:17.126785       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:19:17.126817       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:19:17.126848       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:19:17.129495       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:19:17.130432       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:19:17.130477       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:19:17.133191       1 config.go:199] "Starting service config controller"
	I1028 18:19:17.133490       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:19:17.133783       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:19:17.133819       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:19:17.134422       1 config.go:328] "Starting node config controller"
	I1028 18:19:17.135704       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:19:17.234372       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 18:19:17.234379       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:19:17.235864       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [81dee6b97da1d32130a2ffc775ba19bfdb7dda5d4b4297b8f424e6d06c6f7108] <==
	I1028 18:19:13.790371       1 serving.go:386] Generated self-signed cert in-memory
	W1028 18:19:15.581710       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 18:19:15.581847       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 18:19:15.581877       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 18:19:15.581958       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 18:19:15.619473       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 18:19:15.621143       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:19:15.623612       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 18:19:15.623832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 18:19:15.623858       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 18:19:15.627699       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 18:19:15.728853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [eb6f02ca68915b1390e1af577bc685920670e0f164af679177bf5037afb0a646] <==
	I1028 18:18:49.213495       1 serving.go:386] Generated self-signed cert in-memory
	W1028 18:19:00.163785       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.50.62:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.62:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.62:46350->192.168.50.62:8443: read: connection reset by peer
	W1028 18:19:00.163833       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 18:19:00.163843       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 18:19:00.174213       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 18:19:00.174256       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1028 18:19:00.174277       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1028 18:19:00.177479       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 18:19:00.177511       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 18:19:00.177551       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I1028 18:19:00.177561       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1028 18:19:00.177610       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I1028 18:19:00.177925       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1028 18:19:00.177932       1 run.go:72] "command failed" err="finished without leader elect"
	I1028 18:19:00.177940       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	
	
	==> kubelet <==
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:12.449606    3555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1a64698ecaa6da2495b33aaaa9b36e6-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-192352\" (UID: \"b1a64698ecaa6da2495b33aaaa9b36e6\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-192352"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:12.449663    3555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/845842a1f02a9a7d242972d4a955aaa3-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-192352\" (UID: \"845842a1f02a9a7d242972d4a955aaa3\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-192352"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:12.616508    3555 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-192352"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: E1028 18:19:12.617479    3555 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.62:8443: connect: connection refused" node="kubernetes-upgrade-192352"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:12.723970    3555 scope.go:117] "RemoveContainer" containerID="eb6f02ca68915b1390e1af577bc685920670e0f164af679177bf5037afb0a646"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:12.724590    3555 scope.go:117] "RemoveContainer" containerID="299194ba62a3a96c8d8f72aa3727f36c7f29922ff2d23342e6ef15a7970daae7"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:12.725189    3555 scope.go:117] "RemoveContainer" containerID="c394d97b5c653c3070cd03c9c81823d6930434141932db816510da11a7528d21"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:12.725447    3555 scope.go:117] "RemoveContainer" containerID="48629f709679bf7de6b3ef442e0f6f0d52f6e4e7bb5b4c1970895fe4b69e214c"
	Oct 28 18:19:12 kubernetes-upgrade-192352 kubelet[3555]: E1028 18:19:12.849783    3555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-192352?timeout=10s\": dial tcp 192.168.50.62:8443: connect: connection refused" interval="800ms"
	Oct 28 18:19:13 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:13.019093    3555 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-192352"
	Oct 28 18:19:13 kubernetes-upgrade-192352 kubelet[3555]: E1028 18:19:13.019995    3555 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.62:8443: connect: connection refused" node="kubernetes-upgrade-192352"
	Oct 28 18:19:13 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:13.821956    3555 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-192352"
	Oct 28 18:19:15 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:15.765997    3555 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-192352"
	Oct 28 18:19:15 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:15.766218    3555 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-192352"
	Oct 28 18:19:15 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:15.766299    3555 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 18:19:15 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:15.767803    3555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.211712    3555 apiserver.go:52] "Watching apiserver"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.238237    3555 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.297275    3555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/710b8e31-d525-4d6f-95c8-5619a208762c-tmp\") pod \"storage-provisioner\" (UID: \"710b8e31-d525-4d6f-95c8-5619a208762c\") " pod="kube-system/storage-provisioner"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.297486    3555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b7c508d-1540-4863-b6f6-97bef33c1881-lib-modules\") pod \"kube-proxy-zgx88\" (UID: \"7b7c508d-1540-4863-b6f6-97bef33c1881\") " pod="kube-system/kube-proxy-zgx88"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.297575    3555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b7c508d-1540-4863-b6f6-97bef33c1881-xtables-lock\") pod \"kube-proxy-zgx88\" (UID: \"7b7c508d-1540-4863-b6f6-97bef33c1881\") " pod="kube-system/kube-proxy-zgx88"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.518115    3555 scope.go:117] "RemoveContainer" containerID="4be3a1f54338190792ca8641aea4427b2cc99bade57b4d0ce24d66f738109622"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.519915    3555 scope.go:117] "RemoveContainer" containerID="5760687b0ef4da81b223d9ca64b9161766afca6592f7eb5bd2e9c6a09cb2ec6b"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.520334    3555 scope.go:117] "RemoveContainer" containerID="60febf5908ac683060d67c83474ec36100c6712bd0966f6ed9b1540ffc235c85"
	Oct 28 18:19:16 kubernetes-upgrade-192352 kubelet[3555]: I1028 18:19:16.527051    3555 scope.go:117] "RemoveContainer" containerID="ee235664e090ce911bda301e3c90af94bb73ad7f38aee878b1123aa1558f569e"
	
	
	==> storage-provisioner [60febf5908ac683060d67c83474ec36100c6712bd0966f6ed9b1540ffc235c85] <==
	I1028 18:18:48.451148       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [c2e00d0f3473b23e1f6ee192c289d60cc20f6de636e3f486e17c2e4d5d2a942a] <==
	I1028 18:19:16.847690       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 18:19:16.861538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 18:19:16.861582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 18:19:16.889467       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 18:19:16.891787       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c8c66a8-4ba1-4fdb-9ec4-943b7702b4c9", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-192352_e3c138d0-b215-4e0a-8903-8d085f311cbd became leader
	I1028 18:19:16.894705       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-192352_e3c138d0-b215-4e0a-8903-8d085f311cbd!
	I1028 18:19:16.996872       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-192352_e3c138d0-b215-4e0a-8903-8d085f311cbd!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-192352 -n kubernetes-upgrade-192352
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-192352 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-192352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-192352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-192352: (1.093361899s)
--- FAIL: TestKubernetesUpgrade (427.39s)

TestPause/serial/SecondStartNoReconfiguration (59.52s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-006166 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-006166 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.916091359s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-006166] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-006166" primary control-plane node in "pause-006166" cluster
	* Updating the running kvm2 "pause-006166" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-006166" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1028 18:14:02.395814   56823 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:14:02.395994   56823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:14:02.396008   56823 out.go:358] Setting ErrFile to fd 2...
	I1028 18:14:02.396015   56823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:14:02.396274   56823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:14:02.397063   56823 out.go:352] Setting JSON to false
	I1028 18:14:02.398422   56823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6985,"bootTime":1730132257,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:14:02.398508   56823 start.go:139] virtualization: kvm guest
	I1028 18:14:02.400532   56823 out.go:177] * [pause-006166] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:14:02.401768   56823 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:14:02.401813   56823 notify.go:220] Checking for updates...
	I1028 18:14:02.404058   56823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:14:02.405168   56823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:14:02.410089   56823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:14:02.411440   56823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:14:02.412742   56823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:14:02.414603   56823 config.go:182] Loaded profile config "pause-006166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:14:02.415193   56823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:02.415251   56823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:02.432110   56823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I1028 18:14:02.432563   56823 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:02.433177   56823 main.go:141] libmachine: Using API Version  1
	I1028 18:14:02.433208   56823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:02.433641   56823 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:02.433943   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:02.434196   56823 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:14:02.434602   56823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:02.434646   56823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:02.452997   56823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I1028 18:14:02.453384   56823 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:02.453890   56823 main.go:141] libmachine: Using API Version  1
	I1028 18:14:02.453912   56823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:02.454255   56823 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:02.454473   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:02.498277   56823 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:14:02.499431   56823 start.go:297] selected driver: kvm2
	I1028 18:14:02.499448   56823 start.go:901] validating driver "kvm2" against &{Name:pause-006166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-006166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:14:02.499644   56823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:14:02.500051   56823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:14:02.500172   56823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:14:02.519481   56823 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:14:02.520522   56823 cni.go:84] Creating CNI manager for ""
	I1028 18:14:02.520597   56823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:14:02.520674   56823 start.go:340] cluster config:
	{Name:pause-006166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-006166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:14:02.520865   56823 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:14:02.523181   56823 out.go:177] * Starting "pause-006166" primary control-plane node in "pause-006166" cluster
	I1028 18:14:02.524320   56823 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:14:02.524370   56823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:14:02.524382   56823 cache.go:56] Caching tarball of preloaded images
	I1028 18:14:02.524488   56823 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:14:02.524503   56823 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:14:02.524658   56823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/config.json ...
	I1028 18:14:02.524889   56823 start.go:360] acquireMachinesLock for pause-006166: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:14:22.573302   56823 start.go:364] duration metric: took 20.04837431s to acquireMachinesLock for "pause-006166"
	I1028 18:14:22.573353   56823 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:14:22.573362   56823 fix.go:54] fixHost starting: 
	I1028 18:14:22.573786   56823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:22.573832   56823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:22.592194   56823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I1028 18:14:22.592635   56823 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:22.593190   56823 main.go:141] libmachine: Using API Version  1
	I1028 18:14:22.593213   56823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:22.593528   56823 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:22.593712   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:22.593863   56823 main.go:141] libmachine: (pause-006166) Calling .GetState
	I1028 18:14:22.595407   56823 fix.go:112] recreateIfNeeded on pause-006166: state=Running err=<nil>
	W1028 18:14:22.595426   56823 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:14:22.597273   56823 out.go:177] * Updating the running kvm2 "pause-006166" VM ...
	I1028 18:14:22.598530   56823 machine.go:93] provisionDockerMachine start ...
	I1028 18:14:22.598545   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:22.598712   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:22.601008   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.601444   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:22.601459   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.601661   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:22.601808   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:22.601974   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:22.602136   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:22.602272   56823 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:22.602485   56823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1028 18:14:22.602495   56823 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:14:22.714858   56823 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-006166
	
	I1028 18:14:22.714888   56823 main.go:141] libmachine: (pause-006166) Calling .GetMachineName
	I1028 18:14:22.715153   56823 buildroot.go:166] provisioning hostname "pause-006166"
	I1028 18:14:22.715183   56823 main.go:141] libmachine: (pause-006166) Calling .GetMachineName
	I1028 18:14:22.715379   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:22.718385   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.718762   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:22.718797   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.718962   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:22.719167   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:22.719322   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:22.719459   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:22.719670   56823 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:22.719874   56823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1028 18:14:22.719890   56823 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-006166 && echo "pause-006166" | sudo tee /etc/hostname
	I1028 18:14:22.847644   56823 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-006166
	
	I1028 18:14:22.847670   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:22.850216   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.850562   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:22.850602   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.850745   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:22.850912   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:22.851059   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:22.851213   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:22.851394   56823 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:22.851579   56823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1028 18:14:22.851601   56823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-006166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-006166/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-006166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:14:22.966005   56823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:14:22.966052   56823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:14:22.966086   56823 buildroot.go:174] setting up certificates
	I1028 18:14:22.966101   56823 provision.go:84] configureAuth start
	I1028 18:14:22.966124   56823 main.go:141] libmachine: (pause-006166) Calling .GetMachineName
	I1028 18:14:22.966379   56823 main.go:141] libmachine: (pause-006166) Calling .GetIP
	I1028 18:14:22.969414   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.969752   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:22.969779   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.969954   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:22.972549   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.972860   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:22.972884   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:22.973071   56823 provision.go:143] copyHostCerts
	I1028 18:14:22.973138   56823 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:14:22.973149   56823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:14:22.973209   56823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:14:22.973315   56823 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:14:22.973325   56823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:14:22.973352   56823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:14:22.973426   56823 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:14:22.973436   56823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:14:22.973464   56823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:14:22.973535   56823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.pause-006166 san=[127.0.0.1 192.168.61.105 localhost minikube pause-006166]
	I1028 18:14:23.095676   56823 provision.go:177] copyRemoteCerts
	I1028 18:14:23.095729   56823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:14:23.095753   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:23.098669   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:23.099007   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:23.099044   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:23.099283   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:23.099481   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:23.099631   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:23.099805   56823 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/pause-006166/id_rsa Username:docker}
	I1028 18:14:23.191607   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:14:23.218144   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1028 18:14:23.250400   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:14:23.282866   56823 provision.go:87] duration metric: took 316.752619ms to configureAuth
	I1028 18:14:23.282896   56823 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:14:23.283160   56823 config.go:182] Loaded profile config "pause-006166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:14:23.283257   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:23.286333   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:23.286806   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:23.286843   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:23.287034   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:23.287275   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:23.287460   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:23.287607   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:23.287861   56823 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:23.288044   56823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1028 18:14:23.288061   56823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:14:28.793824   56823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:14:28.793854   56823 machine.go:96] duration metric: took 6.195311597s to provisionDockerMachine
	I1028 18:14:28.793893   56823 start.go:293] postStartSetup for "pause-006166" (driver="kvm2")
	I1028 18:14:28.793913   56823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:14:28.793941   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:28.794300   56823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:14:28.794334   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:28.796950   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:28.797324   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:28.797359   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:28.797529   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:28.797694   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:28.797834   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:28.797941   56823 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/pause-006166/id_rsa Username:docker}
	I1028 18:14:28.878450   56823 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:14:28.882481   56823 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:14:28.882504   56823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:14:28.882559   56823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:14:28.882626   56823 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:14:28.882718   56823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:14:28.892091   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:14:28.917803   56823 start.go:296] duration metric: took 123.888783ms for postStartSetup
	I1028 18:14:28.917856   56823 fix.go:56] duration metric: took 6.344493413s for fixHost
	I1028 18:14:28.917883   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:28.920980   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:28.921364   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:28.921393   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:28.921611   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:28.921828   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:28.922024   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:28.922193   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:28.922365   56823 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:28.922535   56823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1028 18:14:28.922545   56823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:14:29.038531   56823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730139269.027329206
	
	I1028 18:14:29.038556   56823 fix.go:216] guest clock: 1730139269.027329206
	I1028 18:14:29.038567   56823 fix.go:229] Guest: 2024-10-28 18:14:29.027329206 +0000 UTC Remote: 2024-10-28 18:14:28.917861501 +0000 UTC m=+26.566310322 (delta=109.467705ms)
	I1028 18:14:29.038590   56823 fix.go:200] guest clock delta is within tolerance: 109.467705ms
	I1028 18:14:29.038597   56823 start.go:83] releasing machines lock for "pause-006166", held for 6.465264831s
	I1028 18:14:29.038623   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:29.038864   56823 main.go:141] libmachine: (pause-006166) Calling .GetIP
	I1028 18:14:29.041895   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:29.042295   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:29.042321   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:29.042526   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:29.042971   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:29.043164   56823 main.go:141] libmachine: (pause-006166) Calling .DriverName
	I1028 18:14:29.043276   56823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:14:29.043329   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:29.043378   56823 ssh_runner.go:195] Run: cat /version.json
	I1028 18:14:29.043401   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHHostname
	I1028 18:14:29.046119   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:29.046208   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:29.046503   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:29.046536   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:29.046567   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:29.046585   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:29.046803   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:29.046824   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHPort
	I1028 18:14:29.046983   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:29.046989   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHKeyPath
	I1028 18:14:29.047141   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:29.047152   56823 main.go:141] libmachine: (pause-006166) Calling .GetSSHUsername
	I1028 18:14:29.047274   56823 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/pause-006166/id_rsa Username:docker}
	I1028 18:14:29.047288   56823 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/pause-006166/id_rsa Username:docker}
	I1028 18:14:29.148535   56823 ssh_runner.go:195] Run: systemctl --version
	I1028 18:14:29.177863   56823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:14:29.391321   56823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:14:29.401077   56823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:14:29.401138   56823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:14:29.416081   56823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 18:14:29.416108   56823 start.go:495] detecting cgroup driver to use...
	I1028 18:14:29.416172   56823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:14:29.447260   56823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:14:29.467765   56823 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:14:29.467820   56823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:14:29.483062   56823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:14:29.505763   56823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:14:29.653939   56823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:14:29.817319   56823 docker.go:233] disabling docker service ...
	I1028 18:14:29.817412   56823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:14:29.844676   56823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:14:29.861968   56823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:14:30.067346   56823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:14:30.296876   56823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
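Note: the stop/disable/mask sequence above touches both the socket and the service units for cri-docker and docker. Both are socket-activated under systemd, so stopping only the service would let a later connection to the socket start it again; disabling the socket and masking the service prevents that. A minimal sketch of the equivalent manual steps, using the same unit names as the log:

    sudo systemctl stop cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service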
	I1028 18:14:30.323572   56823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:14:30.350963   56823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:14:30.351028   56823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:30.363407   56823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:14:30.363491   56823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:30.376768   56823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:30.388079   56823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:30.412674   56823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:14:30.430084   56823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:30.452458   56823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:30.474331   56823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
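Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the following settings (fragment only, other options omitted; values exactly as written by the commands in this log):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]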
	I1028 18:14:30.498748   56823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:14:30.520020   56823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:14:30.538429   56823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:14:30.869407   56823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:14:31.484402   56823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:14:31.484491   56823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:14:31.490979   56823 start.go:563] Will wait 60s for crictl version
	I1028 18:14:31.491039   56823 ssh_runner.go:195] Run: which crictl
	I1028 18:14:31.495612   56823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:14:31.538111   56823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:14:31.538216   56823 ssh_runner.go:195] Run: crio --version
	I1028 18:14:31.572993   56823 ssh_runner.go:195] Run: crio --version
	I1028 18:14:31.606164   56823 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:14:31.607468   56823 main.go:141] libmachine: (pause-006166) Calling .GetIP
	I1028 18:14:31.610535   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:31.610829   56823 main.go:141] libmachine: (pause-006166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:66:e3", ip: ""} in network mk-pause-006166: {Iface:virbr3 ExpiryTime:2024-10-28 19:13:24 +0000 UTC Type:0 Mac:52:54:00:48:66:e3 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-006166 Clientid:01:52:54:00:48:66:e3}
	I1028 18:14:31.610862   56823 main.go:141] libmachine: (pause-006166) DBG | domain pause-006166 has defined IP address 192.168.61.105 and MAC address 52:54:00:48:66:e3 in network mk-pause-006166
	I1028 18:14:31.611067   56823 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:14:31.615517   56823 kubeadm.go:883] updating cluster {Name:pause-006166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-006166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-p
lugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:14:31.615688   56823 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:14:31.615747   56823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:14:31.660567   56823 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:14:31.660595   56823 crio.go:433] Images already preloaded, skipping extraction
	I1028 18:14:31.660661   56823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:14:31.703914   56823 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:14:31.703939   56823 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:14:31.703948   56823 kubeadm.go:934] updating node { 192.168.61.105 8443 v1.31.2 crio true true} ...
	I1028 18:14:31.704059   56823 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-006166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-006166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:14:31.704137   56823 ssh_runner.go:195] Run: crio config
	I1028 18:14:31.754536   56823 cni.go:84] Creating CNI manager for ""
	I1028 18:14:31.754559   56823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:14:31.754569   56823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:14:31.754591   56823 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-006166 NodeName:pause-006166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:14:31.754722   56823 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-006166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:14:31.754778   56823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:14:31.767477   56823 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:14:31.767545   56823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:14:31.776863   56823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 18:14:31.795465   56823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:14:31.813062   56823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
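The kubeadm config printed in full above is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new before kubeadm is invoked. When debugging a run like this one, the staged file can be read back from the VM; a hypothetical check, assuming the pause-006166 profile is still up:

    out/minikube-linux-amd64 -p pause-006166 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"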
	I1028 18:14:31.831470   56823 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1028 18:14:31.835711   56823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:14:31.991569   56823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:14:32.009533   56823 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166 for IP: 192.168.61.105
	I1028 18:14:32.009572   56823 certs.go:194] generating shared ca certs ...
	I1028 18:14:32.009591   56823 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:14:32.009746   56823 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:14:32.009803   56823 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:14:32.009816   56823 certs.go:256] generating profile certs ...
	I1028 18:14:32.009947   56823 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/client.key
	I1028 18:14:32.010039   56823 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/apiserver.key.8f130092
	I1028 18:14:32.010092   56823 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/proxy-client.key
	I1028 18:14:32.010236   56823 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:14:32.010296   56823 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:14:32.010310   56823 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:14:32.010346   56823 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:14:32.010381   56823 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:14:32.010416   56823 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:14:32.010465   56823 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:14:32.011284   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:14:32.038003   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:14:32.063963   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:14:32.094231   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:14:32.123978   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 18:14:32.151240   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:14:32.179559   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:14:32.206148   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/pause-006166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:14:32.374452   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:14:32.437054   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:14:32.488562   56823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:14:32.519213   56823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:14:32.536921   56823 ssh_runner.go:195] Run: openssl version
	I1028 18:14:32.561686   56823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:14:32.589724   56823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:14:32.629841   56823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:14:32.629917   56823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:14:32.700344   56823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:14:32.764225   56823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:14:32.839478   56823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:14:32.871396   56823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:14:32.871557   56823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:14:32.913437   56823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:14:32.950657   56823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:14:32.971641   56823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:14:32.980535   56823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:14:32.980620   56823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:14:32.992179   56823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
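The ls/openssl/ln steps above implement OpenSSL's hashed-directory lookup: a CA certificate in /etc/ssl/certs is located through a symlink named <subject-hash>.0, where the hash is computed with openssl x509 -hash. A minimal sketch of one iteration, using the minikubeCA values seen in this run:

    # prints b5213941 for minikubeCA.pem in this run
    hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0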
	I1028 18:14:33.004126   56823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:14:33.010600   56823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:14:33.017863   56823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:14:33.027963   56823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:14:33.039241   56823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:14:33.064646   56823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:14:33.077459   56823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:14:33.086235   56823 kubeadm.go:392] StartCluster: {Name:pause-006166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-006166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plug
in:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:14:33.086418   56823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:14:33.086483   56823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:14:33.143995   56823 cri.go:89] found id: "d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76"
	I1028 18:14:33.144021   56823 cri.go:89] found id: "e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8"
	I1028 18:14:33.144051   56823 cri.go:89] found id: "a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9"
	I1028 18:14:33.144057   56823 cri.go:89] found id: "c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9"
	I1028 18:14:33.144062   56823 cri.go:89] found id: "8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c"
	I1028 18:14:33.144067   56823 cri.go:89] found id: "ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7"
	I1028 18:14:33.144071   56823 cri.go:89] found id: ""
	I1028 18:14:33.144134   56823 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
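The container IDs in the "found id:" lines near the end of the stderr output come from filtering running and exited containers by the pod-namespace label the kubelet attaches to each one. The same listing can be reproduced on the node with the flags shown in the log:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system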
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-006166 -n pause-006166
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-006166 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-006166 logs -n 25: (1.326111169s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| image   | test-preload-598338 image list | test-preload-598338       | jenkins | v1.34.0 | 28 Oct 24 18:10 UTC | 28 Oct 24 18:10 UTC |
	| delete  | -p test-preload-598338         | test-preload-598338       | jenkins | v1.34.0 | 28 Oct 24 18:10 UTC | 28 Oct 24 18:10 UTC |
	| start   | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:10 UTC | 28 Oct 24 18:11 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC | 28 Oct 24 18:11 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC | 28 Oct 24 18:11 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:12 UTC |
	| start   | -p kubernetes-upgrade-192352   | kubernetes-upgrade-192352 | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-146010         | offline-crio-146010       | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:13 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-165190      | minikube                  | jenkins | v1.26.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:14 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-006166 --memory=2048  | pause-006166              | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:14 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-146010         | offline-crio-146010       | jenkins | v1.34.0 | 28 Oct 24 18:13 UTC | 28 Oct 24 18:13 UTC |
	| start   | -p NoKubernetes-793119         | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:13 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-793119         | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:13 UTC | 28 Oct 24 18:14 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-006166                | pause-006166              | jenkins | v1.34.0 | 28 Oct 24 18:14 UTC | 28 Oct 24 18:14 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-165190 stop    | minikube                  | jenkins | v1.26.0 | 28 Oct 24 18:14 UTC | 28 Oct 24 18:14 UTC |
	| start   | -p stopped-upgrade-165190      | stopped-upgrade-165190    | jenkins | v1.34.0 | 28 Oct 24 18:14 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-793119         | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:14 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
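Per the audit table above, the step under test is the second start of the already-running pause-006166 profile. A reproduction command reconstructed from that audit row (binary path and flags as logged, assuming the same workspace layout):

    out/minikube-linux-amd64 start -p pause-006166 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio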
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:14:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:14:42.701123   57381 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:14:42.701208   57381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:14:42.701211   57381 out.go:358] Setting ErrFile to fd 2...
	I1028 18:14:42.701214   57381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:14:42.701414   57381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:14:42.702076   57381 out.go:352] Setting JSON to false
	I1028 18:14:42.703327   57381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7026,"bootTime":1730132257,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:14:42.703438   57381 start.go:139] virtualization: kvm guest
	I1028 18:14:42.705511   57381 out.go:177] * [NoKubernetes-793119] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:14:42.706800   57381 notify.go:220] Checking for updates...
	I1028 18:14:42.706839   57381 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:14:42.708375   57381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:14:42.709802   57381 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:14:42.711017   57381 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:14:42.712191   57381 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:14:42.713440   57381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:14:42.714935   57381 config.go:182] Loaded profile config "NoKubernetes-793119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:14:42.715312   57381 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:42.715353   57381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:42.731775   57381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1028 18:14:42.732235   57381 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:42.732850   57381 main.go:141] libmachine: Using API Version  1
	I1028 18:14:42.732864   57381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:42.733190   57381 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:42.733371   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:42.733496   57381 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1028 18:14:42.733555   57381 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1028 18:14:42.733570   57381 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:14:42.733831   57381 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:42.733859   57381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:42.748227   57381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I1028 18:14:42.748578   57381 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:42.749038   57381 main.go:141] libmachine: Using API Version  1
	I1028 18:14:42.749067   57381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:42.749421   57381 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:42.749610   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:42.786074   57381 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:14:42.787201   57381 start.go:297] selected driver: kvm2
	I1028 18:14:42.787208   57381 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-793119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-793119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:14:42.787310   57381 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:14:42.787559   57381 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1028 18:14:42.787614   57381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:14:42.787674   57381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:14:42.802574   57381 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:14:42.803306   57381 cni.go:84] Creating CNI manager for ""
	I1028 18:14:42.803352   57381 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:14:42.803362   57381 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1028 18:14:42.803401   57381 start.go:340] cluster config:
	{Name:NoKubernetes-793119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-793119 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetric
s:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:14:42.803506   57381 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:14:42.805004   57381 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-793119
	I1028 18:14:40.352525   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:40.353138   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | unable to find current IP address of domain stopped-upgrade-165190 in network mk-stopped-upgrade-165190
	I1028 18:14:40.353175   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | I1028 18:14:40.353092   57197 retry.go:31] will retry after 2.687299553s: waiting for machine to come up
	I1028 18:14:43.041660   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:43.042201   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | unable to find current IP address of domain stopped-upgrade-165190 in network mk-stopped-upgrade-165190
	I1028 18:14:43.042221   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | I1028 18:14:43.042166   57197 retry.go:31] will retry after 2.871090512s: waiting for machine to come up
	I1028 18:14:42.806061   57381 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1028 18:14:42.965857   57381 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1028 18:14:42.965993   57381 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/NoKubernetes-793119/config.json ...
	I1028 18:14:42.966232   57381 start.go:360] acquireMachinesLock for NoKubernetes-793119: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:14:47.212769   57381 start.go:364] duration metric: took 4.246500869s to acquireMachinesLock for "NoKubernetes-793119"
	I1028 18:14:47.212807   57381 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:14:47.212813   57381 fix.go:54] fixHost starting: 
	I1028 18:14:47.213239   57381 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:47.213277   57381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:47.229934   57381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I1028 18:14:47.230309   57381 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:47.230936   57381 main.go:141] libmachine: Using API Version  1
	I1028 18:14:47.230958   57381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:47.231296   57381 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:47.231548   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:47.231701   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetState
	I1028 18:14:47.233187   57381 fix.go:112] recreateIfNeeded on NoKubernetes-793119: state=Running err=<nil>
	W1028 18:14:47.233197   57381 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:14:47.235057   57381 out.go:177] * Updating the running kvm2 "NoKubernetes-793119" VM ...
	I1028 18:14:42.974898   56823 pod_ready.go:93] pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:42.974924   56823 pod_ready.go:82] duration metric: took 1.50688432s for pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:42.974934   56823 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:44.981263   56823 pod_ready.go:103] pod "etcd-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:46.482979   56823 pod_ready.go:93] pod "etcd-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:46.483002   56823 pod_ready.go:82] duration metric: took 3.508061485s for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:46.483011   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:47.236231   57381 machine.go:93] provisionDockerMachine start ...
	I1028 18:14:47.236242   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:47.236424   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.238934   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.239356   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.239379   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.239531   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.239694   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.239854   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.240008   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.240187   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.240430   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.240437   57381 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:14:47.354092   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-793119
	
	I1028 18:14:47.354113   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetMachineName
	I1028 18:14:47.354356   57381 buildroot.go:166] provisioning hostname "NoKubernetes-793119"
	I1028 18:14:47.354373   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetMachineName
	I1028 18:14:47.354558   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.357434   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.357739   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.357757   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.357885   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.358064   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.358221   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.358347   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.358518   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.358759   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.358770   57381 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-793119 && echo "NoKubernetes-793119" | sudo tee /etc/hostname
	I1028 18:14:47.485250   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-793119
	
	I1028 18:14:47.485281   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.488330   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.488731   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.488758   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.488971   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.489148   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.489317   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.489514   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.489681   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.489909   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.489928   57381 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-793119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-793119/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-793119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:14:47.609898   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:14:47.609914   57381 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:14:47.609926   57381 buildroot.go:174] setting up certificates
	I1028 18:14:47.609933   57381 provision.go:84] configureAuth start
	I1028 18:14:47.609940   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetMachineName
	I1028 18:14:47.610195   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetIP
	I1028 18:14:47.612768   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.613118   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.613139   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.613252   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.615800   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.616097   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.616126   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.616313   57381 provision.go:143] copyHostCerts
	I1028 18:14:47.616362   57381 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:14:47.616369   57381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:14:47.616421   57381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:14:47.616536   57381 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:14:47.616542   57381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:14:47.616571   57381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:14:47.616628   57381 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:14:47.616631   57381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:14:47.616647   57381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:14:47.616686   57381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-793119 san=[127.0.0.1 192.168.39.133 NoKubernetes-793119 localhost minikube]
	I1028 18:14:45.915547   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.916062   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has current primary IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.916090   57162 main.go:141] libmachine: (stopped-upgrade-165190) Found IP for machine: 192.168.72.163
	I1028 18:14:45.916099   57162 main.go:141] libmachine: (stopped-upgrade-165190) Reserving static IP address...
	I1028 18:14:45.916535   57162 main.go:141] libmachine: (stopped-upgrade-165190) Reserved static IP address: 192.168.72.163
	I1028 18:14:45.916572   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "stopped-upgrade-165190", mac: "52:54:00:e0:e3:ee", ip: "192.168.72.163"} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:45.916584   57162 main.go:141] libmachine: (stopped-upgrade-165190) Waiting for SSH to be available...
	I1028 18:14:45.916607   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | skip adding static IP to network mk-stopped-upgrade-165190 - found existing host DHCP lease matching {name: "stopped-upgrade-165190", mac: "52:54:00:e0:e3:ee", ip: "192.168.72.163"}
	I1028 18:14:45.916619   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | Getting to WaitForSSH function...
	I1028 18:14:45.918629   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.918855   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:45.918881   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.918992   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | Using SSH client type: external
	I1028 18:14:45.919015   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa (-rw-------)
	I1028 18:14:45.919074   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:14:45.919087   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | About to run SSH command:
	I1028 18:14:45.919101   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | exit 0
	I1028 18:14:46.012083   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | SSH cmd err, output: <nil>: 
	I1028 18:14:46.012391   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetConfigRaw
	I1028 18:14:46.012993   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:46.015414   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.015768   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.015805   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.015982   57162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/config.json ...
	I1028 18:14:46.016201   57162 machine.go:93] provisionDockerMachine start ...
	I1028 18:14:46.016224   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:46.016422   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.018467   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.018771   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.018798   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.018921   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.019086   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.019214   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.019321   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.019435   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.019600   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.019610   57162 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:14:46.148217   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:14:46.148248   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetMachineName
	I1028 18:14:46.148533   57162 buildroot.go:166] provisioning hostname "stopped-upgrade-165190"
	I1028 18:14:46.148579   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetMachineName
	I1028 18:14:46.148769   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.151723   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.152116   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.152141   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.152269   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.152448   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.152604   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.152742   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.152903   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.153117   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.153131   57162 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-165190 && echo "stopped-upgrade-165190" | sudo tee /etc/hostname
	I1028 18:14:46.292013   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-165190
	
	I1028 18:14:46.292039   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.294674   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.295023   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.295054   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.295200   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.295401   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.295557   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.295714   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.295864   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.296086   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.296104   57162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-165190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-165190/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-165190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:14:46.431416   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:14:46.431450   57162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:14:46.431484   57162 buildroot.go:174] setting up certificates
	I1028 18:14:46.431495   57162 provision.go:84] configureAuth start
	I1028 18:14:46.431508   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetMachineName
	I1028 18:14:46.431793   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:46.434422   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.434771   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.434814   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.434930   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.437105   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.437455   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.437480   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.437633   57162 provision.go:143] copyHostCerts
	I1028 18:14:46.437700   57162 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:14:46.437715   57162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:14:46.437784   57162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:14:46.437974   57162 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:14:46.437988   57162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:14:46.438047   57162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:14:46.438164   57162 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:14:46.438178   57162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:14:46.438208   57162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:14:46.438288   57162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-165190 san=[127.0.0.1 192.168.72.163 localhost minikube stopped-upgrade-165190]
	I1028 18:14:46.513773   57162 provision.go:177] copyRemoteCerts
	I1028 18:14:46.513820   57162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:14:46.513841   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.516336   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.516695   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.516744   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.516838   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.516996   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.517120   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.517225   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	I1028 18:14:46.608414   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:14:46.629258   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:14:46.649510   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:14:46.669256   57162 provision.go:87] duration metric: took 237.749319ms to configureAuth
	I1028 18:14:46.669283   57162 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:14:46.669440   57162 config.go:182] Loaded profile config "stopped-upgrade-165190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 18:14:46.669518   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.672106   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.672527   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.672554   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.672730   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.672917   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.673100   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.673254   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.673423   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.673610   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.673636   57162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:14:46.953411   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:14:46.953442   57162 machine.go:96] duration metric: took 937.224839ms to provisionDockerMachine
	I1028 18:14:46.953455   57162 start.go:293] postStartSetup for "stopped-upgrade-165190" (driver="kvm2")
	I1028 18:14:46.953468   57162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:14:46.953488   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:46.953810   57162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:14:46.953844   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.956629   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.956996   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.957024   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.957239   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.957441   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.957614   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.957792   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	I1028 18:14:47.049798   57162 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:14:47.054473   57162 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 18:14:47.054502   57162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:14:47.054570   57162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:14:47.054663   57162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:14:47.054762   57162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:14:47.063613   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:14:47.083980   57162 start.go:296] duration metric: took 130.511383ms for postStartSetup
	I1028 18:14:47.084017   57162 fix.go:56] duration metric: took 18.045245472s for fixHost
	I1028 18:14:47.084055   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:47.086680   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.087034   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.087073   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.087201   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:47.087417   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.087576   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.087705   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:47.087894   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.088111   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:47.088127   57162 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:14:47.212620   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730139287.168031670
	
	I1028 18:14:47.212645   57162 fix.go:216] guest clock: 1730139287.168031670
	I1028 18:14:47.212653   57162 fix.go:229] Guest: 2024-10-28 18:14:47.16803167 +0000 UTC Remote: 2024-10-28 18:14:47.084021957 +0000 UTC m=+18.845357680 (delta=84.009713ms)
	I1028 18:14:47.212674   57162 fix.go:200] guest clock delta is within tolerance: 84.009713ms
	I1028 18:14:47.212680   57162 start.go:83] releasing machines lock for "stopped-upgrade-165190", held for 18.173949868s
	I1028 18:14:47.212707   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.212979   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:47.215356   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.215677   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.215720   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.215862   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.216386   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.216645   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.216729   57162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:14:47.216763   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:47.216889   57162 ssh_runner.go:195] Run: cat /version.json
	I1028 18:14:47.216916   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:47.219470   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.219761   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.219838   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.219864   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.220019   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:47.220110   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.220135   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.220168   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.220258   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:47.220323   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:47.220380   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.220451   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	I1028 18:14:47.220498   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:47.220607   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	W1028 18:14:47.334095   57162 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 18:14:47.334165   57162 ssh_runner.go:195] Run: systemctl --version
	I1028 18:14:47.339258   57162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:14:47.478811   57162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:14:47.484876   57162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:14:47.484952   57162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:14:47.503289   57162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:14:47.503313   57162 start.go:495] detecting cgroup driver to use...
	I1028 18:14:47.503375   57162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:14:47.516367   57162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:14:47.529044   57162 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:14:47.529106   57162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:14:47.545222   57162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:14:47.556938   57162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:14:47.655040   57162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:14:47.773136   57162 docker.go:233] disabling docker service ...
	I1028 18:14:47.773193   57162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:14:47.786114   57162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:14:47.796592   57162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:14:47.908957   57162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:14:48.040923   57162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:14:48.052803   57162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:14:48.069132   57162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1028 18:14:48.069187   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.076659   57162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:14:48.076704   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.084537   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.092184   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.099789   57162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:14:48.109449   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.117943   57162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.134770   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.143106   57162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:14:48.159927   57162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:14:48.160007   57162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:14:48.176507   57162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:14:48.189971   57162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:14:48.311007   57162 ssh_runner.go:195] Run: sudo systemctl restart crio
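Collected here for readability, the CRI-O runtime configuration performed in the log lines above corresponds roughly to the following commands on the guest VM. This is a sketch assembled from the ssh_runner invocations shown in the log (paths, the pause image tag, and the cgroup settings are taken from those lines); it is not minikube's exact code path.
	# Illustrative consolidation of the logged ssh_runner commands (run on the guest VM).
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter                         # fallback: the bridge-nf sysctl was missing above
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio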
	I1028 18:14:48.442958   57162 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:14:48.443033   57162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:14:48.447791   57162 start.go:563] Will wait 60s for crictl version
	I1028 18:14:48.447850   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:48.451147   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:14:48.484371   57162 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I1028 18:14:48.484456   57162 ssh_runner.go:195] Run: crio --version
	I1028 18:14:48.515253   57162 ssh_runner.go:195] Run: crio --version
	I1028 18:14:48.546318   57162 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I1028 18:14:48.490148   56823 pod_ready.go:103] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:50.991615   56823 pod_ready.go:103] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:47.812309   57381 provision.go:177] copyRemoteCerts
	I1028 18:14:47.812360   57381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:14:47.812398   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.814989   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.815415   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.815434   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.815601   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.815757   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.815921   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.816037   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:47.902973   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:14:47.932048   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 18:14:47.959740   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:14:47.987862   57381 provision.go:87] duration metric: took 377.91902ms to configureAuth
	I1028 18:14:47.987892   57381 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:14:47.988096   57381 config.go:182] Loaded profile config "NoKubernetes-793119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1028 18:14:47.988188   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.991199   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.991506   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.991528   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.991722   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.991904   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.992059   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.992186   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.992317   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.992461   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.992495   57381 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:14:48.547614   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:48.550083   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:48.550440   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:48.550469   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:48.550626   57162 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 18:14:48.554198   57162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:14:48.564277   57162 kubeadm.go:883] updating cluster {Name:stopped-upgrade-165190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-165190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1028 18:14:48.564385   57162 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1028 18:14:48.564423   57162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:14:48.596635   57162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I1028 18:14:48.596688   57162 ssh_runner.go:195] Run: which lz4
	I1028 18:14:48.600027   57162 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:14:48.603482   57162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:14:48.603511   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I1028 18:14:50.139294   57162 crio.go:462] duration metric: took 1.539294389s to copy over tarball
	I1028 18:14:50.139355   57162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:14:52.974493   57162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.835111323s)
	I1028 18:14:52.974526   57162 crio.go:469] duration metric: took 2.835208219s to extract the tarball
	I1028 18:14:52.974532   57162 ssh_runner.go:146] rm: /preloaded.tar.lz4
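In summary, the preload handling logged above amounts to the following steps on the guest; this is a rough sketch for readability, and the tarball copy itself is streamed over SSH by minikube rather than fetched by the guest.
	# Illustrative summary of the preload steps shown in the log above.
	stat -c "%s %y" /preloaded.tar.lz4 || true         # no preload tarball present yet, so one is copied in
	# minikube copies preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 over SSH
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json                   # re-check which images are now available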
	I1028 18:14:53.019581   57162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:14:53.054559   57162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I1028 18:14:53.054581   57162 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:14:53.054643   57162 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:14:53.054688   57162 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.054697   57162 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.054723   57162 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 18:14:53.054768   57162 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.055243   57162 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.055269   57162 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.055389   57162 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.057282   57162 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.057479   57162 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.057503   57162 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.057622   57162 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.058063   57162 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:14:53.058145   57162 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.058243   57162 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 18:14:53.058243   57162 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.528550   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:14:53.528561   57381 machine.go:96] duration metric: took 6.292323896s to provisionDockerMachine
	I1028 18:14:53.528570   57381 start.go:293] postStartSetup for "NoKubernetes-793119" (driver="kvm2")
	I1028 18:14:53.528578   57381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:14:53.528590   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.528901   57381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:14:53.528922   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.531701   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.532152   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.532173   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.532364   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.532574   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.532736   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.532872   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:53.623568   57381 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:14:53.632015   57381 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:14:53.632032   57381 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:14:53.632099   57381 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:14:53.632195   57381 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:14:53.632311   57381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:14:53.642990   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:14:53.667469   57381 start.go:296] duration metric: took 138.885843ms for postStartSetup
	I1028 18:14:53.667499   57381 fix.go:56] duration metric: took 6.454687723s for fixHost
	I1028 18:14:53.667517   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.670554   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.670885   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.670915   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.671107   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.671309   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.671511   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.671682   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.671945   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:53.672106   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:53.672110   57381 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:14:53.785497   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730139293.752318112
	
	I1028 18:14:53.785509   57381 fix.go:216] guest clock: 1730139293.752318112
	I1028 18:14:53.785517   57381 fix.go:229] Guest: 2024-10-28 18:14:53.752318112 +0000 UTC Remote: 2024-10-28 18:14:53.66750102 +0000 UTC m=+11.005133197 (delta=84.817092ms)
	I1028 18:14:53.785561   57381 fix.go:200] guest clock delta is within tolerance: 84.817092ms
	I1028 18:14:53.785566   57381 start.go:83] releasing machines lock for "NoKubernetes-793119", held for 6.572780299s
	I1028 18:14:53.785593   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.785867   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetIP
	I1028 18:14:53.788901   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.789461   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.789500   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.789687   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.790230   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.790396   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.790511   57381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:14:53.790552   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.790606   57381 ssh_runner.go:195] Run: cat /version.json
	I1028 18:14:53.790623   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.793698   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794082   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794106   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.794122   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794271   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.794422   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.794525   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.794539   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794561   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.794686   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:53.794823   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.794934   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.795061   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.795219   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:53.886067   57381 ssh_runner.go:195] Run: systemctl --version
	I1028 18:14:53.913177   57381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:14:53.490619   56823 pod_ready.go:103] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:53.990589   56823 pod_ready.go:93] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:53.990615   56823 pod_ready.go:82] duration metric: took 7.507597092s for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:53.990630   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.008643   56823 pod_ready.go:93] pod "kube-controller-manager-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:54.008671   56823 pod_ready.go:82] duration metric: took 18.033072ms for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.008688   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.020800   56823 pod_ready.go:93] pod "kube-proxy-5psrd" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:54.020825   56823 pod_ready.go:82] duration metric: took 12.128786ms for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.020838   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.027323   56823 pod_ready.go:93] pod "kube-scheduler-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:54.027347   56823 pod_ready.go:82] duration metric: took 6.500704ms for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.027358   56823 pod_ready.go:39] duration metric: took 12.565610801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:14:54.027377   56823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:14:54.046297   56823 ops.go:34] apiserver oom_adj: -16
	I1028 18:14:54.046320   56823 kubeadm.go:597] duration metric: took 20.728306656s to restartPrimaryControlPlane
	I1028 18:14:54.046330   56823 kubeadm.go:394] duration metric: took 20.960106629s to StartCluster
	I1028 18:14:54.046350   56823 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:14:54.046426   56823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:14:54.047299   56823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:14:54.220729   56823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:14:54.221459   56823 config.go:182] Loaded profile config "pause-006166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:14:54.221527   56823 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:14:54.226345   57381 out.go:177]   - Kubernetes: Stopping ...
	I1028 18:14:54.410251   56823 out.go:177] * Verifying Kubernetes components...
	I1028 18:14:54.645261   56823 out.go:177] * Enabled addons: 
	I1028 18:14:54.645334   57381 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1028 18:14:54.679302   57381 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:14:54.679421   57381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:14:54.722597   57381 cri.go:89] found id: "3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce"
	I1028 18:14:54.722610   57381 cri.go:89] found id: "fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b"
	I1028 18:14:54.722615   57381 cri.go:89] found id: "9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9"
	I1028 18:14:54.722619   57381 cri.go:89] found id: "38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365"
	I1028 18:14:54.722622   57381 cri.go:89] found id: "daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa"
	I1028 18:14:54.722625   57381 cri.go:89] found id: "69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c"
	I1028 18:14:54.722628   57381 cri.go:89] found id: "786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761"
	I1028 18:14:54.722630   57381 cri.go:89] found id: ""
	W1028 18:14:54.722645   57381 kubeadm.go:838] found 7 kube-system containers to stop
	I1028 18:14:54.722652   57381 cri.go:252] Stopping containers: [3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b 9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9 38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365 daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa 69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c 786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761]
	I1028 18:14:54.722742   57381 ssh_runner.go:195] Run: which crictl
	I1028 18:14:54.726595   57381 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b 9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9 38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365 daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa 69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c 786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761
	I1028 18:14:56.803546   57381 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b 9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9 38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365 daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa 69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c 786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761: (2.076914462s)
	I1028 18:14:56.803615   57381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:14:56.821698   57381 out.go:177]   - Kubernetes: Stopped
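
For the NoKubernetes profile, "Kubernetes: Stopped" comes down to stopping the kubelet and then every kube-system container that CRI-O reports, which are exactly the two crictl invocations logged above. A small Go sketch of that list-then-stop pattern, again assuming local execution instead of the SSH runner:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // List all kube-system containers (running or exited), as in:
        // crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatalf("listing containers failed: %v", err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            log.Println("no kube-system containers to stop")
            return
        }

        // Stop them all with the same 10-second grace period used in the log.
        args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
        if err := exec.Command("sudo", args...).Run(); err != nil {
            log.Fatalf("stopping containers failed: %v", err)
        }
        log.Printf("stopped %d kube-system containers", len(ids))
    }
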
	I1028 18:14:55.032486   56823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:14:55.171970   56823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:14:55.187665   56823 node_ready.go:35] waiting up to 6m0s for node "pause-006166" to be "Ready" ...
	I1028 18:14:55.215401   56823 addons.go:510] duration metric: took 993.849242ms for enable addons: enabled=[]
	I1028 18:14:55.758845   56823 node_ready.go:49] node "pause-006166" has status "Ready":"True"
	I1028 18:14:55.758870   56823 node_ready.go:38] duration metric: took 571.171084ms for node "pause-006166" to be "Ready" ...
	I1028 18:14:55.758879   56823 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:14:55.763379   56823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.767839   56823 pod_ready.go:93] pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.767857   56823 pod_ready.go:82] duration metric: took 4.449679ms for pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.767866   56823 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.772627   56823 pod_ready.go:93] pod "etcd-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.772652   56823 pod_ready.go:82] duration metric: took 4.780158ms for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.772665   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.776602   56823 pod_ready.go:93] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.776621   56823 pod_ready.go:82] duration metric: took 3.947802ms for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.776630   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.780509   56823 pod_ready.go:93] pod "kube-controller-manager-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.780525   56823 pod_ready.go:82] duration metric: took 3.890345ms for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.780534   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.987875   56823 pod_ready.go:93] pod "kube-proxy-5psrd" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.987898   56823 pod_ready.go:82] duration metric: took 207.358426ms for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.987912   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:56.530089   56823 pod_ready.go:93] pod "kube-scheduler-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:56.530111   56823 pod_ready.go:82] duration metric: took 542.192667ms for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:56.530120   56823 pod_ready.go:39] duration metric: took 771.232869ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
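
The readiness waits above poll each control-plane pod in kube-system until its Ready condition reports True. A compact client-go sketch of the same check for a single pod; the kubeconfig path and pod name are placeholders, and the fixed 2-second poll interval is an assumption rather than minikube's actual retry behaviour:

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-006166", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // A pod counts as Ready once the PodReady condition is True.
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        log.Printf("pod %s is Ready", pod.Name)
                        return
                    }
                }
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for pod to become Ready")
            case <-time.After(2 * time.Second):
            }
        }
    }
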
	I1028 18:14:56.530133   56823 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:14:56.530181   56823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:14:56.544218   56823 api_server.go:72] duration metric: took 2.323439848s to wait for apiserver process to appear ...
	I1028 18:14:56.544240   56823 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:14:56.544257   56823 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8443/healthz ...
	I1028 18:14:56.549124   56823 api_server.go:279] https://192.168.61.105:8443/healthz returned 200:
	ok
	I1028 18:14:56.549903   56823 api_server.go:141] control plane version: v1.31.2
	I1028 18:14:56.549920   56823 api_server.go:131] duration metric: took 5.674736ms to wait for apiserver health ...
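
The healthz wait above is an HTTPS GET against the apiserver on port 8443 that succeeds once the endpoint returns 200 with the body "ok". A minimal sketch of that probe; the address is taken from the log, but skipping TLS verification is a simplification, since the real client authenticates with the cluster's certificates:

    package main

    import (
        "crypto/tls"
        "io"
        "log"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: trust the apiserver's self-signed certificate instead of
            // loading the cluster CA and client certificates as minikube does.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.105:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    log.Println("apiserver is healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("apiserver never reported healthy")
    }
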
	I1028 18:14:56.549928   56823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:14:56.589847   56823 system_pods.go:59] 6 kube-system pods found
	I1028 18:14:56.589871   56823 system_pods.go:61] "coredns-7c65d6cfc9-g4r99" [9b3280f5-4031-4d12-ba29-18994efa2753] Running
	I1028 18:14:56.589875   56823 system_pods.go:61] "etcd-pause-006166" [2172a295-bb1e-4537-bf5d-7e49fd84a4ae] Running
	I1028 18:14:56.589879   56823 system_pods.go:61] "kube-apiserver-pause-006166" [a88b01ba-adb7-4e45-b2b3-e2aed8e432ff] Running
	I1028 18:14:56.589882   56823 system_pods.go:61] "kube-controller-manager-pause-006166" [2752845b-2215-4582-8977-09031047db16] Running
	I1028 18:14:56.589886   56823 system_pods.go:61] "kube-proxy-5psrd" [1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3] Running
	I1028 18:14:56.589889   56823 system_pods.go:61] "kube-scheduler-pause-006166" [7d5d418f-0522-4479-b756-8cda89fdb343] Running
	I1028 18:14:56.589894   56823 system_pods.go:74] duration metric: took 39.961897ms to wait for pod list to return data ...
	I1028 18:14:56.589900   56823 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:14:56.787250   56823 default_sa.go:45] found service account: "default"
	I1028 18:14:56.787281   56823 default_sa.go:55] duration metric: took 197.375279ms for default service account to be created ...
	I1028 18:14:56.787292   56823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:14:56.990987   56823 system_pods.go:86] 6 kube-system pods found
	I1028 18:14:56.991021   56823 system_pods.go:89] "coredns-7c65d6cfc9-g4r99" [9b3280f5-4031-4d12-ba29-18994efa2753] Running
	I1028 18:14:56.991029   56823 system_pods.go:89] "etcd-pause-006166" [2172a295-bb1e-4537-bf5d-7e49fd84a4ae] Running
	I1028 18:14:56.991042   56823 system_pods.go:89] "kube-apiserver-pause-006166" [a88b01ba-adb7-4e45-b2b3-e2aed8e432ff] Running
	I1028 18:14:56.991050   56823 system_pods.go:89] "kube-controller-manager-pause-006166" [2752845b-2215-4582-8977-09031047db16] Running
	I1028 18:14:56.991056   56823 system_pods.go:89] "kube-proxy-5psrd" [1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3] Running
	I1028 18:14:56.991062   56823 system_pods.go:89] "kube-scheduler-pause-006166" [7d5d418f-0522-4479-b756-8cda89fdb343] Running
	I1028 18:14:56.991074   56823 system_pods.go:126] duration metric: took 203.774407ms to wait for k8s-apps to be running ...
	I1028 18:14:56.991087   56823 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:14:56.991134   56823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:14:57.006605   56823 system_svc.go:56] duration metric: took 15.509623ms WaitForService to wait for kubelet
	I1028 18:14:57.006635   56823 kubeadm.go:582] duration metric: took 2.785859208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:14:57.006657   56823 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:14:57.188918   56823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:14:57.188944   56823 node_conditions.go:123] node cpu capacity is 2
	I1028 18:14:57.188954   56823 node_conditions.go:105] duration metric: took 182.292109ms to run NodePressure ...
	I1028 18:14:57.188966   56823 start.go:241] waiting for startup goroutines ...
	I1028 18:14:57.188972   56823 start.go:246] waiting for cluster config update ...
	I1028 18:14:57.188980   56823 start.go:255] writing updated cluster config ...
	I1028 18:14:57.189229   56823 ssh_runner.go:195] Run: rm -f paused
	I1028 18:14:57.246388   56823 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:14:57.248072   56823 out.go:177] * Done! kubectl is now configured to use "pause-006166" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.934335146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139297934311458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aabeb812-a295-4d95-b908-ebc5e5916f38 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.934925448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7142ffc6-e542-40e7-bf8d-63dd97f0fc73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.934980608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7142ffc6-e542-40e7-bf8d-63dd97f0fc73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.935428269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7142ffc6-e542-40e7-bf8d-63dd97f0fc73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.980610553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e65dcfa5-f135-4c36-abcc-a25ef95fb448 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.980872053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e65dcfa5-f135-4c36-abcc-a25ef95fb448 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.982174803Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f75eef0a-6abf-4943-8b6c-eacf49b330b1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.982518470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139297982498304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f75eef0a-6abf-4943-8b6c-eacf49b330b1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.983152935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1efb5d4a-bba1-4781-9ebf-9e128a0a2238 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.983204791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1efb5d4a-bba1-4781-9ebf-9e128a0a2238 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:57 pause-006166 crio[2614]: time="2024-10-28 18:14:57.983443482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1efb5d4a-bba1-4781-9ebf-9e128a0a2238 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.032805058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be713cbe-2139-41f5-8cb2-0261315af956 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.032927189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be713cbe-2139-41f5-8cb2-0261315af956 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.034908304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=959380e1-93ee-4120-bd49-d787da3bc370 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.035429263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139298035402158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=959380e1-93ee-4120-bd49-d787da3bc370 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.036049713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b4f413a-1dc7-4b85-8ecf-6e3920148609 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.036155607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b4f413a-1dc7-4b85-8ecf-6e3920148609 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.036478687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b4f413a-1dc7-4b85-8ecf-6e3920148609 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.077021210Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce01c0c0-7752-49d5-ad14-1d80a1dc5972 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.077097608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce01c0c0-7752-49d5-ad14-1d80a1dc5972 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.078241800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d87f8863-78ee-47d7-bde6-744abf14c01d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.078584596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139298078560486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d87f8863-78ee-47d7-bde6-744abf14c01d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.079103199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce5a063a-48a2-4e54-9387-8135381b418f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.079154671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce5a063a-48a2-4e54-9387-8135381b418f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:14:58 pause-006166 crio[2614]: time="2024-10-28 18:14:58.080128319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce5a063a-48a2-4e54-9387-8135381b418f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cd9b8c69bb7c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 seconds ago      Running             coredns                   2                   b04fb54c2dabc       coredns-7c65d6cfc9-g4r99
	8229a7929ab0c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   17 seconds ago      Running             kube-proxy                2                   9b4ebd81f9dfa       kube-proxy-5psrd
	9c211dfcb5657       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 seconds ago      Running             kube-apiserver            2                   88868adfab974       kube-apiserver-pause-006166
	caf86c1171b94       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   22 seconds ago      Running             kube-scheduler            2                   94475506e6925       kube-scheduler-pause-006166
	32b777c4c83c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Running             etcd                      2                   63691074101de       etcd-pause-006166
	9c7209ab2a7ee       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   22 seconds ago      Running             kube-controller-manager   2                   fe341f0964661       kube-controller-manager-pause-006166
	d7f73994320d1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   27 seconds ago      Exited              coredns                   1                   d8bf458555813       coredns-7c65d6cfc9-g4r99
	e9a91087ea6b2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   27 seconds ago      Exited              kube-scheduler            1                   8bde608dff59c       kube-scheduler-pause-006166
	a935a4471c892       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   27 seconds ago      Exited              etcd                      1                   0f58c1ff5c9d0       etcd-pause-006166
	c1b5dc9bf9cc2       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   27 seconds ago      Exited              kube-proxy                1                   0db28c281ef5a       kube-proxy-5psrd
	8f31cf1ecf742       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   27 seconds ago      Exited              kube-controller-manager   1                   93e69e8607815       kube-controller-manager-pause-006166
	ae29799650d92       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   28 seconds ago      Exited              kube-apiserver            1                   0cfc575eea00d       kube-apiserver-pause-006166
	
	
	==> coredns [0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35445 - 41977 "HINFO IN 1385094499035695066.4011758787778453954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00772471s
	
	
	==> coredns [d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76] <==
	
	
	==> describe nodes <==
	Name:               pause-006166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-006166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=pause-006166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_13_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-006166
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:14:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.105
	  Hostname:    pause-006166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 987d9680a6d1441092630e72d82ce270
	  System UUID:                987d9680-a6d1-4410-9263-0e72d82ce270
	  Boot ID:                    476a1be5-a37f-4aa5-9fb8-cc800e8f881b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-g4r99                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-pause-006166                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         68s
	  kube-system                 kube-apiserver-pause-006166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-pause-006166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-5psrd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-pause-006166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     68s                kubelet          Node pause-006166 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    68s                kubelet          Node pause-006166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  68s                kubelet          Node pause-006166 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                67s                kubelet          Node pause-006166 status is now: NodeReady
	  Normal  RegisteredNode           65s                node-controller  Node pause-006166 event: Registered Node pause-006166 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-006166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-006166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-006166 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-006166 event: Registered Node pause-006166 in Controller
	
	
	==> dmesg <==
	[  +8.475211] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057378] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061230] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.218895] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.135874] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.302052] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.156057] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.579715] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.065215] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.484338] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.109612] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.728043] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +0.673899] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 18:14] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.321084] systemd-fstab-generator[2076]: Ignoring "noauto" option for root device
	[  +0.144045] systemd-fstab-generator[2094]: Ignoring "noauto" option for root device
	[  +0.229027] systemd-fstab-generator[2118]: Ignoring "noauto" option for root device
	[  +0.226231] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.534761] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +1.194438] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +3.219710] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[  +0.076388] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.529833] kauditd_printk_skb: 38 callbacks suppressed
	[ +14.362079] systemd-fstab-generator[3691]: Ignoring "noauto" option for root device
	[  +0.083438] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd] <==
	{"level":"info","ts":"2024-10-28T18:14:37.927093Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:14:37.927137Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T18:14:37.927269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:14:37.928325Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:14:37.928412Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:14:37.929464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.105:2379"}
	{"level":"info","ts":"2024-10-28T18:14:37.929759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-10-28T18:14:55.616877Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.218337894s","expected-duration":"1s"}
	{"level":"warn","ts":"2024-10-28T18:14:55.617086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.218584626s","expected-duration":"100ms","prefix":"","request":"header:<ID:4108851675723447627 > lease_revoke:<id:390592d4557a8fdd>","response":"size:28"}
	{"level":"warn","ts":"2024-10-28T18:14:55.743313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.105335ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4108851675723447628 > lease_revoke:<id:390592d4557a8fbc>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T18:14:55.743426Z","caller":"traceutil/trace.go:171","msg":"trace[992174360] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:491; }","duration":"1.439412503s","start":"2024-10-28T18:14:54.304004Z","end":"2024-10-28T18:14:55.743416Z","steps":["trace[992174360] 'read index received'  (duration: 94.705615ms)","trace[992174360] 'applied index is now lower than readState.Index'  (duration: 1.344706154s)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T18:14:55.743510Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.439501165s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:14:55.743547Z","caller":"traceutil/trace.go:171","msg":"trace[257147906] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:458; }","duration":"1.439541164s","start":"2024-10-28T18:14:54.303996Z","end":"2024-10-28T18:14:55.743537Z","steps":["trace[257147906] 'agreement among raft nodes before linearized reading'  (duration: 1.439489955s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:55.743628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"798.573974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:14:55.743737Z","caller":"traceutil/trace.go:171","msg":"trace[300687829] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:458; }","duration":"798.696814ms","start":"2024-10-28T18:14:54.945032Z","end":"2024-10-28T18:14:55.743729Z","steps":["trace[300687829] 'agreement among raft nodes before linearized reading'  (duration: 798.542628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:55.744320Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:14:54.944996Z","time spent":"799.313162ms","remote":"127.0.0.1:55324","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-28T18:14:55.744092Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"567.17316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-006166\" ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2024-10-28T18:14:55.744858Z","caller":"traceutil/trace.go:171","msg":"trace[4375928] range","detail":"{range_begin:/registry/minions/pause-006166; range_end:; response_count:1; response_revision:458; }","duration":"567.93965ms","start":"2024-10-28T18:14:55.176910Z","end":"2024-10-28T18:14:55.744850Z","steps":["trace[4375928] 'agreement among raft nodes before linearized reading'  (duration: 567.066194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:55.744906Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:14:55.176875Z","time spent":"568.019666ms","remote":"127.0.0.1:55470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":1,"response size":5451,"request content":"key:\"/registry/minions/pause-006166\" "}
	{"level":"warn","ts":"2024-10-28T18:14:56.515289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.802277ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4108851675723447647 > lease_revoke:<id:390592d45643bc16>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T18:14:56.515456Z","caller":"traceutil/trace.go:171","msg":"trace[1108757725] linearizableReadLoop","detail":"{readStateIndex:494; appliedIndex:493; }","duration":"211.374336ms","start":"2024-10-28T18:14:56.304068Z","end":"2024-10-28T18:14:56.515443Z","steps":["trace[1108757725] 'read index received'  (duration: 65.370441ms)","trace[1108757725] 'applied index is now lower than readState.Index'  (duration: 146.002327ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T18:14:56.515569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.50378ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:14:56.515622Z","caller":"traceutil/trace.go:171","msg":"trace[1460295377] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:458; }","duration":"211.572692ms","start":"2024-10-28T18:14:56.304041Z","end":"2024-10-28T18:14:56.515614Z","steps":["trace[1460295377] 'agreement among raft nodes before linearized reading'  (duration: 211.480979ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:56.515589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.752301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-006166\" ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2024-10-28T18:14:56.515906Z","caller":"traceutil/trace.go:171","msg":"trace[413452735] range","detail":"{range_begin:/registry/minions/pause-006166; range_end:; response_count:1; response_revision:458; }","duration":"142.022143ms","start":"2024-10-28T18:14:56.373824Z","end":"2024-10-28T18:14:56.515847Z","steps":["trace[413452735] 'agreement among raft nodes before linearized reading'  (duration: 141.718084ms)"],"step_count":1}
	
	
	==> etcd [a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9] <==
	
	
	==> kernel <==
	 18:14:58 up 1 min,  0 users,  load average: 0.42, 0.18, 0.06
	Linux pause-006166 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652] <==
	I1028 18:14:39.344065       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1028 18:14:39.356413       1 aggregator.go:171] initial CRD sync complete...
	I1028 18:14:39.356452       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 18:14:39.356485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 18:14:39.381112       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 18:14:39.381157       1 policy_source.go:224] refreshing policies
	I1028 18:14:39.411271       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 18:14:39.417866       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 18:14:39.418207       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 18:14:39.418252       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 18:14:39.418379       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 18:14:39.418590       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 18:14:39.418778       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 18:14:39.420614       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 18:14:39.427181       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 18:14:39.461774       1 cache.go:39] Caches are synced for autoregister controller
	I1028 18:14:39.467012       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 18:14:40.338375       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 18:14:41.270052       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 18:14:41.288278       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 18:14:41.357158       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 18:14:41.412143       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 18:14:41.425523       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 18:14:42.884266       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 18:14:43.032276       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7] <==
	I1028 18:14:30.846297       1 options.go:228] external host was not specified, using 192.168.61.105
	I1028 18:14:30.860969       1 server.go:142] Version: v1.31.2
	I1028 18:14:30.861021       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c] <==
	
	
	==> kube-controller-manager [9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775] <==
	I1028 18:14:42.741752       1 shared_informer.go:320] Caches are synced for crt configmap
	I1028 18:14:42.745929       1 shared_informer.go:320] Caches are synced for cronjob
	I1028 18:14:42.748340       1 shared_informer.go:320] Caches are synced for endpoint
	I1028 18:14:42.748756       1 shared_informer.go:320] Caches are synced for taint
	I1028 18:14:42.748866       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 18:14:42.749037       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-006166"
	I1028 18:14:42.749150       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 18:14:42.761407       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 18:14:42.765555       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 18:14:42.793915       1 shared_informer.go:320] Caches are synced for deployment
	I1028 18:14:42.801566       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1028 18:14:42.831005       1 shared_informer.go:320] Caches are synced for disruption
	I1028 18:14:42.840207       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 18:14:42.850908       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 18:14:42.856217       1 shared_informer.go:320] Caches are synced for ephemeral
	I1028 18:14:42.868744       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 18:14:42.879005       1 shared_informer.go:320] Caches are synced for expand
	I1028 18:14:42.917193       1 shared_informer.go:320] Caches are synced for PVC protection
	I1028 18:14:42.930538       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 18:14:42.948043       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 18:14:43.139428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="337.722803ms"
	I1028 18:14:43.139745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="98.709µs"
	I1028 18:14:43.394504       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 18:14:43.412908       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 18:14:43.413007       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:14:40.921357       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:14:40.938904       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.105"]
	E1028 18:14:40.939007       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:14:40.988313       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:14:40.988384       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:14:40.988418       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:14:40.991723       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:14:40.992069       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:14:40.992110       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:14:40.994384       1 config.go:199] "Starting service config controller"
	I1028 18:14:40.994650       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:14:40.994786       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:14:40.994819       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:14:40.995549       1 config.go:328] "Starting node config controller"
	I1028 18:14:40.995589       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:14:41.096120       1 shared_informer.go:320] Caches are synced for node config
	I1028 18:14:41.096299       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:14:41.096432       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9] <==
	
	
	==> kube-scheduler [caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0] <==
	I1028 18:14:37.035435       1 serving.go:386] Generated self-signed cert in-memory
	W1028 18:14:39.363832       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 18:14:39.366200       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 18:14:39.366387       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 18:14:39.366414       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 18:14:39.393562       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 18:14:39.393643       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:14:39.397062       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 18:14:39.397108       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 18:14:39.397273       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 18:14:39.397362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 18:14:39.497497       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8] <==
	
	
	==> kubelet <==
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.706587    3249 kubelet_node_status.go:72] "Attempting to register node" node="pause-006166"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: E1028 18:14:35.707599    3249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.105:8443: connect: connection refused" node="pause-006166"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.810523    3249 scope.go:117] "RemoveContainer" containerID="a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.811137    3249 scope.go:117] "RemoveContainer" containerID="ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.812798    3249 scope.go:117] "RemoveContainer" containerID="8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.814632    3249 scope.go:117] "RemoveContainer" containerID="e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: E1028 18:14:35.929477    3249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-006166?timeout=10s\": dial tcp 192.168.61.105:8443: connect: connection refused" interval="800ms"
	Oct 28 18:14:36 pause-006166 kubelet[3249]: I1028 18:14:36.109202    3249 kubelet_node_status.go:72] "Attempting to register node" node="pause-006166"
	Oct 28 18:14:36 pause-006166 kubelet[3249]: E1028 18:14:36.111570    3249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.105:8443: connect: connection refused" node="pause-006166"
	Oct 28 18:14:36 pause-006166 kubelet[3249]: I1028 18:14:36.913840    3249 kubelet_node_status.go:72] "Attempting to register node" node="pause-006166"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.460628    3249 kubelet_node_status.go:111] "Node was previously registered" node="pause-006166"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.460854    3249 kubelet_node_status.go:75] "Successfully registered node" node="pause-006166"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.460880    3249 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.462148    3249 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.303269    3249 apiserver.go:52] "Watching apiserver"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.317577    3249 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.376849    3249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3-xtables-lock\") pod \"kube-proxy-5psrd\" (UID: \"1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3\") " pod="kube-system/kube-proxy-5psrd"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.376957    3249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3-lib-modules\") pod \"kube-proxy-5psrd\" (UID: \"1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3\") " pod="kube-system/kube-proxy-5psrd"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.609087    3249 scope.go:117] "RemoveContainer" containerID="d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.610377    3249 scope.go:117] "RemoveContainer" containerID="c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9"
	Oct 28 18:14:42 pause-006166 kubelet[3249]: I1028 18:14:42.573104    3249 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 28 18:14:45 pause-006166 kubelet[3249]: E1028 18:14:45.406541    3249 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139285406301822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:14:45 pause-006166 kubelet[3249]: E1028 18:14:45.406587    3249 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139285406301822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:14:55 pause-006166 kubelet[3249]: E1028 18:14:55.407717    3249 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139295407312375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:14:55 pause-006166 kubelet[3249]: E1028 18:14:55.407742    3249 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139295407312375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-006166 -n pause-006166
helpers_test.go:261: (dbg) Run:  kubectl --context pause-006166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-006166 -n pause-006166
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-006166 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-006166 logs -n 25: (1.928716899s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p test-preload-598338         | test-preload-598338       | jenkins | v1.34.0 | 28 Oct 24 18:10 UTC | 28 Oct 24 18:10 UTC |
	| start   | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:10 UTC | 28 Oct 24 18:11 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC | 28 Oct 24 18:11 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:11 UTC | 28 Oct 24 18:11 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-525736       | scheduled-stop-525736     | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:12 UTC |
	| start   | -p kubernetes-upgrade-192352   | kubernetes-upgrade-192352 | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-146010         | offline-crio-146010       | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:13 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-165190      | minikube                  | jenkins | v1.26.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:14 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-006166 --memory=2048  | pause-006166              | jenkins | v1.34.0 | 28 Oct 24 18:12 UTC | 28 Oct 24 18:14 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-146010         | offline-crio-146010       | jenkins | v1.34.0 | 28 Oct 24 18:13 UTC | 28 Oct 24 18:13 UTC |
	| start   | -p NoKubernetes-793119         | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:13 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-793119         | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:13 UTC | 28 Oct 24 18:14 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-006166                | pause-006166              | jenkins | v1.34.0 | 28 Oct 24 18:14 UTC | 28 Oct 24 18:14 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-165190 stop    | minikube                  | jenkins | v1.26.0 | 28 Oct 24 18:14 UTC | 28 Oct 24 18:14 UTC |
	| start   | -p stopped-upgrade-165190      | stopped-upgrade-165190    | jenkins | v1.34.0 | 28 Oct 24 18:14 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-793119         | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:14 UTC | 28 Oct 24 18:14 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-793119         | NoKubernetes-793119       | jenkins | v1.34.0 | 28 Oct 24 18:14 UTC |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:14:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:14:42.701123   57381 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:14:42.701208   57381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:14:42.701211   57381 out.go:358] Setting ErrFile to fd 2...
	I1028 18:14:42.701214   57381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:14:42.701414   57381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:14:42.702076   57381 out.go:352] Setting JSON to false
	I1028 18:14:42.703327   57381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7026,"bootTime":1730132257,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:14:42.703438   57381 start.go:139] virtualization: kvm guest
	I1028 18:14:42.705511   57381 out.go:177] * [NoKubernetes-793119] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:14:42.706800   57381 notify.go:220] Checking for updates...
	I1028 18:14:42.706839   57381 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:14:42.708375   57381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:14:42.709802   57381 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:14:42.711017   57381 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:14:42.712191   57381 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:14:42.713440   57381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:14:42.714935   57381 config.go:182] Loaded profile config "NoKubernetes-793119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:14:42.715312   57381 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:42.715353   57381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:42.731775   57381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1028 18:14:42.732235   57381 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:42.732850   57381 main.go:141] libmachine: Using API Version  1
	I1028 18:14:42.732864   57381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:42.733190   57381 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:42.733371   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:42.733496   57381 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1028 18:14:42.733555   57381 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1028 18:14:42.733570   57381 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:14:42.733831   57381 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:42.733859   57381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:42.748227   57381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I1028 18:14:42.748578   57381 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:42.749038   57381 main.go:141] libmachine: Using API Version  1
	I1028 18:14:42.749067   57381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:42.749421   57381 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:42.749610   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:42.786074   57381 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:14:42.787201   57381 start.go:297] selected driver: kvm2
	I1028 18:14:42.787208   57381 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-793119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-793119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:14:42.787310   57381 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:14:42.787559   57381 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1028 18:14:42.787614   57381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:14:42.787674   57381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:14:42.802574   57381 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:14:42.803306   57381 cni.go:84] Creating CNI manager for ""
	I1028 18:14:42.803352   57381 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:14:42.803362   57381 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1028 18:14:42.803401   57381 start.go:340] cluster config:
	{Name:NoKubernetes-793119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-793119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:14:42.803506   57381 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:14:42.805004   57381 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-793119
	I1028 18:14:40.352525   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:40.353138   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | unable to find current IP address of domain stopped-upgrade-165190 in network mk-stopped-upgrade-165190
	I1028 18:14:40.353175   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | I1028 18:14:40.353092   57197 retry.go:31] will retry after 2.687299553s: waiting for machine to come up
	I1028 18:14:43.041660   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:43.042201   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | unable to find current IP address of domain stopped-upgrade-165190 in network mk-stopped-upgrade-165190
	I1028 18:14:43.042221   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | I1028 18:14:43.042166   57197 retry.go:31] will retry after 2.871090512s: waiting for machine to come up
	I1028 18:14:42.806061   57381 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1028 18:14:42.965857   57381 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1028 18:14:42.965993   57381 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/NoKubernetes-793119/config.json ...
	I1028 18:14:42.966232   57381 start.go:360] acquireMachinesLock for NoKubernetes-793119: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:14:47.212769   57381 start.go:364] duration metric: took 4.246500869s to acquireMachinesLock for "NoKubernetes-793119"
	I1028 18:14:47.212807   57381 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:14:47.212813   57381 fix.go:54] fixHost starting: 
	I1028 18:14:47.213239   57381 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:14:47.213277   57381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:14:47.229934   57381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I1028 18:14:47.230309   57381 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:14:47.230936   57381 main.go:141] libmachine: Using API Version  1
	I1028 18:14:47.230958   57381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:14:47.231296   57381 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:14:47.231548   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:47.231701   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetState
	I1028 18:14:47.233187   57381 fix.go:112] recreateIfNeeded on NoKubernetes-793119: state=Running err=<nil>
	W1028 18:14:47.233197   57381 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:14:47.235057   57381 out.go:177] * Updating the running kvm2 "NoKubernetes-793119" VM ...
	I1028 18:14:42.974898   56823 pod_ready.go:93] pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:42.974924   56823 pod_ready.go:82] duration metric: took 1.50688432s for pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:42.974934   56823 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:44.981263   56823 pod_ready.go:103] pod "etcd-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:46.482979   56823 pod_ready.go:93] pod "etcd-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:46.483002   56823 pod_ready.go:82] duration metric: took 3.508061485s for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:46.483011   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:47.236231   57381 machine.go:93] provisionDockerMachine start ...
	I1028 18:14:47.236242   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:47.236424   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.238934   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.239356   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.239379   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.239531   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.239694   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.239854   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.240008   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.240187   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.240430   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.240437   57381 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:14:47.354092   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-793119
	
	I1028 18:14:47.354113   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetMachineName
	I1028 18:14:47.354356   57381 buildroot.go:166] provisioning hostname "NoKubernetes-793119"
	I1028 18:14:47.354373   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetMachineName
	I1028 18:14:47.354558   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.357434   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.357739   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.357757   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.357885   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.358064   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.358221   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.358347   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.358518   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.358759   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.358770   57381 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-793119 && echo "NoKubernetes-793119" | sudo tee /etc/hostname
	I1028 18:14:47.485250   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-793119
	
	I1028 18:14:47.485281   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.488330   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.488731   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.488758   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.488971   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.489148   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.489317   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.489514   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.489681   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.489909   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.489928   57381 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-793119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-793119/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-793119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:14:47.609898   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:14:47.609914   57381 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:14:47.609926   57381 buildroot.go:174] setting up certificates
	I1028 18:14:47.609933   57381 provision.go:84] configureAuth start
	I1028 18:14:47.609940   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetMachineName
	I1028 18:14:47.610195   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetIP
	I1028 18:14:47.612768   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.613118   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.613139   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.613252   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.615800   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.616097   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.616126   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.616313   57381 provision.go:143] copyHostCerts
	I1028 18:14:47.616362   57381 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:14:47.616369   57381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:14:47.616421   57381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:14:47.616536   57381 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:14:47.616542   57381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:14:47.616571   57381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:14:47.616628   57381 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:14:47.616631   57381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:14:47.616647   57381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:14:47.616686   57381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-793119 san=[127.0.0.1 192.168.39.133 NoKubernetes-793119 localhost minikube]
	I1028 18:14:45.915547   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.916062   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has current primary IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.916090   57162 main.go:141] libmachine: (stopped-upgrade-165190) Found IP for machine: 192.168.72.163
	I1028 18:14:45.916099   57162 main.go:141] libmachine: (stopped-upgrade-165190) Reserving static IP address...
	I1028 18:14:45.916535   57162 main.go:141] libmachine: (stopped-upgrade-165190) Reserved static IP address: 192.168.72.163
	I1028 18:14:45.916572   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "stopped-upgrade-165190", mac: "52:54:00:e0:e3:ee", ip: "192.168.72.163"} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:45.916584   57162 main.go:141] libmachine: (stopped-upgrade-165190) Waiting for SSH to be available...
	I1028 18:14:45.916607   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | skip adding static IP to network mk-stopped-upgrade-165190 - found existing host DHCP lease matching {name: "stopped-upgrade-165190", mac: "52:54:00:e0:e3:ee", ip: "192.168.72.163"}
	I1028 18:14:45.916619   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | Getting to WaitForSSH function...
	I1028 18:14:45.918629   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.918855   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:45.918881   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:45.918992   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | Using SSH client type: external
	I1028 18:14:45.919015   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa (-rw-------)
	I1028 18:14:45.919074   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:14:45.919087   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | About to run SSH command:
	I1028 18:14:45.919101   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | exit 0
	I1028 18:14:46.012083   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | SSH cmd err, output: <nil>: 
	I1028 18:14:46.012391   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetConfigRaw
	I1028 18:14:46.012993   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:46.015414   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.015768   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.015805   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.015982   57162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/config.json ...
	I1028 18:14:46.016201   57162 machine.go:93] provisionDockerMachine start ...
	I1028 18:14:46.016224   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:46.016422   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.018467   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.018771   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.018798   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.018921   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.019086   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.019214   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.019321   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.019435   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.019600   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.019610   57162 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:14:46.148217   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:14:46.148248   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetMachineName
	I1028 18:14:46.148533   57162 buildroot.go:166] provisioning hostname "stopped-upgrade-165190"
	I1028 18:14:46.148579   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetMachineName
	I1028 18:14:46.148769   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.151723   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.152116   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.152141   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.152269   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.152448   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.152604   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.152742   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.152903   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.153117   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.153131   57162 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-165190 && echo "stopped-upgrade-165190" | sudo tee /etc/hostname
	I1028 18:14:46.292013   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-165190
	
	I1028 18:14:46.292039   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.294674   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.295023   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.295054   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.295200   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.295401   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.295557   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.295714   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.295864   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.296086   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.296104   57162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-165190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-165190/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-165190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:14:46.431416   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:14:46.431450   57162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:14:46.431484   57162 buildroot.go:174] setting up certificates
	I1028 18:14:46.431495   57162 provision.go:84] configureAuth start
	I1028 18:14:46.431508   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetMachineName
	I1028 18:14:46.431793   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:46.434422   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.434771   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.434814   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.434930   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.437105   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.437455   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.437480   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.437633   57162 provision.go:143] copyHostCerts
	I1028 18:14:46.437700   57162 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:14:46.437715   57162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:14:46.437784   57162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:14:46.437974   57162 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:14:46.437988   57162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:14:46.438047   57162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:14:46.438164   57162 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:14:46.438178   57162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:14:46.438208   57162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:14:46.438288   57162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-165190 san=[127.0.0.1 192.168.72.163 localhost minikube stopped-upgrade-165190]
	I1028 18:14:46.513773   57162 provision.go:177] copyRemoteCerts
	I1028 18:14:46.513820   57162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:14:46.513841   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.516336   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.516695   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.516744   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.516838   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.516996   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.517120   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.517225   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	I1028 18:14:46.608414   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:14:46.629258   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:14:46.649510   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:14:46.669256   57162 provision.go:87] duration metric: took 237.749319ms to configureAuth
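
The configureAuth step above generates a server certificate whose SANs cover the VM IP, localhost, minikube and the machine name (the san=[...] list at provision.go:117), then copies ca.pem, server.pem and server-key.pem into /etc/docker. A minimal, self-contained Go sketch of issuing a certificate with that SAN set is below; it is illustrative only (self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair), not minikube's provision code.

// sancert.go: sketch of issuing a server certificate whose SANs match the
// minikube log above (VM IP, localhost, minikube, machine name).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-165190"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the log: 127.0.0.1 192.168.72.163 localhost minikube <name>.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.163")},
		DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-165190"},
	}

	// Self-signed for brevity; the real flow signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
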
	I1028 18:14:46.669283   57162 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:14:46.669440   57162 config.go:182] Loaded profile config "stopped-upgrade-165190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 18:14:46.669518   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.672106   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.672527   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.672554   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.672730   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.672917   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.673100   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.673254   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.673423   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:46.673610   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:46.673636   57162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:14:46.953411   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:14:46.953442   57162 machine.go:96] duration metric: took 937.224839ms to provisionDockerMachine
	I1028 18:14:46.953455   57162 start.go:293] postStartSetup for "stopped-upgrade-165190" (driver="kvm2")
	I1028 18:14:46.953468   57162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:14:46.953488   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:46.953810   57162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:14:46.953844   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:46.956629   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.956996   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:46.957024   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:46.957239   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:46.957441   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:46.957614   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:46.957792   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	I1028 18:14:47.049798   57162 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:14:47.054473   57162 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 18:14:47.054502   57162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:14:47.054570   57162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:14:47.054663   57162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:14:47.054762   57162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:14:47.063613   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:14:47.083980   57162 start.go:296] duration metric: took 130.511383ms for postStartSetup
	I1028 18:14:47.084017   57162 fix.go:56] duration metric: took 18.045245472s for fixHost
	I1028 18:14:47.084055   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:47.086680   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.087034   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.087073   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.087201   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:47.087417   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.087576   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.087705   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:47.087894   57162 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.088111   57162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I1028 18:14:47.088127   57162 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:14:47.212620   57162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730139287.168031670
	
	I1028 18:14:47.212645   57162 fix.go:216] guest clock: 1730139287.168031670
	I1028 18:14:47.212653   57162 fix.go:229] Guest: 2024-10-28 18:14:47.16803167 +0000 UTC Remote: 2024-10-28 18:14:47.084021957 +0000 UTC m=+18.845357680 (delta=84.009713ms)
	I1028 18:14:47.212674   57162 fix.go:200] guest clock delta is within tolerance: 84.009713ms
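
The fix.go lines above parse the guest's `date +%s.%N` output, compare it to the host clock, and accept the drift when it is within tolerance. A stripped-down Go sketch of that comparison follows; the one-second tolerance constant is an assumption for illustration, not minikube's exact value.

// clockdelta.go: sketch of the guest-clock drift check logged by fix.go above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1730139287.168031670")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumption, not minikube's exact value

	guest, err := parseGuestClock("1730139287.168031670")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := guest.Sub(remote)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
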
	I1028 18:14:47.212680   57162 start.go:83] releasing machines lock for "stopped-upgrade-165190", held for 18.173949868s
	I1028 18:14:47.212707   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.212979   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:47.215356   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.215677   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.215720   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.215862   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.216386   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.216645   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .DriverName
	I1028 18:14:47.216729   57162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:14:47.216763   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:47.216889   57162 ssh_runner.go:195] Run: cat /version.json
	I1028 18:14:47.216916   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHHostname
	I1028 18:14:47.219470   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.219761   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.219838   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.219864   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.220019   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:47.220110   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:47.220135   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:47.220168   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.220258   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHPort
	I1028 18:14:47.220323   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:47.220380   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHKeyPath
	I1028 18:14:47.220451   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	I1028 18:14:47.220498   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetSSHUsername
	I1028 18:14:47.220607   57162 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/stopped-upgrade-165190/id_rsa Username:docker}
	W1028 18:14:47.334095   57162 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 18:14:47.334165   57162 ssh_runner.go:195] Run: systemctl --version
	I1028 18:14:47.339258   57162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:14:47.478811   57162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:14:47.484876   57162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:14:47.484952   57162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:14:47.503289   57162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:14:47.503313   57162 start.go:495] detecting cgroup driver to use...
	I1028 18:14:47.503375   57162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:14:47.516367   57162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:14:47.529044   57162 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:14:47.529106   57162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:14:47.545222   57162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:14:47.556938   57162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:14:47.655040   57162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:14:47.773136   57162 docker.go:233] disabling docker service ...
	I1028 18:14:47.773193   57162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:14:47.786114   57162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:14:47.796592   57162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:14:47.908957   57162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:14:48.040923   57162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:14:48.052803   57162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:14:48.069132   57162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1028 18:14:48.069187   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.076659   57162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:14:48.076704   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.084537   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.092184   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.099789   57162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:14:48.109449   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.117943   57162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:48.134770   57162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
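
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.7 as the pause image and cgroupfs as its cgroup manager with conmon_cgroup = "pod". A rough Go sketch of the net effect, applied to an in-memory config string rather than via sed over SSH, is below; the sample input fragment is invented for illustration.

// crioconf.go: sketch of the effect of the sed edits above on 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of the file before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Same substitutions as the log's sed invocations.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
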
	I1028 18:14:48.143106   57162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:14:48.159927   57162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:14:48.160007   57162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:14:48.176507   57162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
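
The three commands above form a small fallback: probe the net.bridge.bridge-nf-call-iptables sysctl, load br_netfilter if the probe fails, and enable IPv4 forwarding. A local Go sketch of the same sequence follows; running the commands directly instead of over SSH is a simplification of what ssh_runner does.

// netfilter.go: sketch of the sysctl probe / modprobe / ip_forward sequence above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Step 1: verify the bridge netfilter sysctl exists.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Step 2: a failure usually just means the module is not loaded yet.
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	// Step 3: enable IPv4 forwarding, as in the log's `echo 1 > .../ip_forward`.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
}
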
	I1028 18:14:48.189971   57162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:14:48.311007   57162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:14:48.442958   57162 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:14:48.443033   57162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:14:48.447791   57162 start.go:563] Will wait 60s for crictl version
	I1028 18:14:48.447850   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:48.451147   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:14:48.484371   57162 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I1028 18:14:48.484456   57162 ssh_runner.go:195] Run: crio --version
	I1028 18:14:48.515253   57162 ssh_runner.go:195] Run: crio --version
	I1028 18:14:48.546318   57162 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I1028 18:14:48.490148   56823 pod_ready.go:103] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:50.991615   56823 pod_ready.go:103] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:47.812309   57381 provision.go:177] copyRemoteCerts
	I1028 18:14:47.812360   57381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:14:47.812398   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.814989   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.815415   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.815434   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.815601   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.815757   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.815921   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.816037   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:47.902973   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:14:47.932048   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 18:14:47.959740   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:14:47.987862   57381 provision.go:87] duration metric: took 377.91902ms to configureAuth
	I1028 18:14:47.987892   57381 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:14:47.988096   57381 config.go:182] Loaded profile config "NoKubernetes-793119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1028 18:14:47.988188   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:47.991199   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.991506   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:47.991528   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:47.991722   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:47.991904   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.992059   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:47.992186   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:47.992317   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:47.992461   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:47.992495   57381 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:14:48.547614   57162 main.go:141] libmachine: (stopped-upgrade-165190) Calling .GetIP
	I1028 18:14:48.550083   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:48.550440   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e3:ee", ip: ""} in network mk-stopped-upgrade-165190: {Iface:virbr4 ExpiryTime:2024-10-28 19:14:40 +0000 UTC Type:0 Mac:52:54:00:e0:e3:ee Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:stopped-upgrade-165190 Clientid:01:52:54:00:e0:e3:ee}
	I1028 18:14:48.550469   57162 main.go:141] libmachine: (stopped-upgrade-165190) DBG | domain stopped-upgrade-165190 has defined IP address 192.168.72.163 and MAC address 52:54:00:e0:e3:ee in network mk-stopped-upgrade-165190
	I1028 18:14:48.550626   57162 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 18:14:48.554198   57162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:14:48.564277   57162 kubeadm.go:883] updating cluster {Name:stopped-upgrade-165190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-165190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1028 18:14:48.564385   57162 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1028 18:14:48.564423   57162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:14:48.596635   57162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I1028 18:14:48.596688   57162 ssh_runner.go:195] Run: which lz4
	I1028 18:14:48.600027   57162 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:14:48.603482   57162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:14:48.603511   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I1028 18:14:50.139294   57162 crio.go:462] duration metric: took 1.539294389s to copy over tarball
	I1028 18:14:50.139355   57162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:14:52.974493   57162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.835111323s)
	I1028 18:14:52.974526   57162 crio.go:469] duration metric: took 2.835208219s to extract the tarball
	I1028 18:14:52.974532   57162 ssh_runner.go:146] rm: /preloaded.tar.lz4
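
The preload handling above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, extracts it into /var with lz4 while preserving xattrs, and finally deletes it. A simplified Go sketch of the check / extract / remove portion is below; it runs the commands locally instead of through ssh_runner, which is purely an illustration.

// preload.go: sketch of the preload tarball handling shown above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		// In the real flow, ssh_runner scp's the cached
		// preloaded-images-k8s-*-cri-o-overlay-amd64.tar.lz4 here first.
		panic("preload tarball missing; copy it before extracting: " + err.Error())
	}

	// Same extraction command as the log, minus the sudo/SSH plumbing.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}

	// The log removes the tarball once it has been extracted.
	if err := os.Remove(tarball); err != nil {
		panic(err)
	}
}
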
	I1028 18:14:53.019581   57162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:14:53.054559   57162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I1028 18:14:53.054581   57162 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:14:53.054643   57162 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:14:53.054688   57162 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.054697   57162 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.054723   57162 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 18:14:53.054768   57162 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.055243   57162 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.055269   57162 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.055389   57162 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.057282   57162 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.057479   57162 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.057503   57162 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.057622   57162 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.058063   57162 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:14:53.058145   57162 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.058243   57162 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 18:14:53.058243   57162 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.528550   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:14:53.528561   57381 machine.go:96] duration metric: took 6.292323896s to provisionDockerMachine
	I1028 18:14:53.528570   57381 start.go:293] postStartSetup for "NoKubernetes-793119" (driver="kvm2")
	I1028 18:14:53.528578   57381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:14:53.528590   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.528901   57381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:14:53.528922   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.531701   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.532152   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.532173   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.532364   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.532574   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.532736   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.532872   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:53.623568   57381 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:14:53.632015   57381 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:14:53.632032   57381 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:14:53.632099   57381 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:14:53.632195   57381 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:14:53.632311   57381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:14:53.642990   57381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:14:53.667469   57381 start.go:296] duration metric: took 138.885843ms for postStartSetup
	I1028 18:14:53.667499   57381 fix.go:56] duration metric: took 6.454687723s for fixHost
	I1028 18:14:53.667517   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.670554   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.670885   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.670915   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.671107   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.671309   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.671511   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.671682   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.671945   57381 main.go:141] libmachine: Using SSH client type: native
	I1028 18:14:53.672106   57381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1028 18:14:53.672110   57381 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:14:53.785497   57381 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730139293.752318112
	
	I1028 18:14:53.785509   57381 fix.go:216] guest clock: 1730139293.752318112
	I1028 18:14:53.785517   57381 fix.go:229] Guest: 2024-10-28 18:14:53.752318112 +0000 UTC Remote: 2024-10-28 18:14:53.66750102 +0000 UTC m=+11.005133197 (delta=84.817092ms)
	I1028 18:14:53.785561   57381 fix.go:200] guest clock delta is within tolerance: 84.817092ms
	I1028 18:14:53.785566   57381 start.go:83] releasing machines lock for "NoKubernetes-793119", held for 6.572780299s
	I1028 18:14:53.785593   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.785867   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetIP
	I1028 18:14:53.788901   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.789461   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.789500   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.789687   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.790230   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.790396   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .DriverName
	I1028 18:14:53.790511   57381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:14:53.790552   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.790606   57381 ssh_runner.go:195] Run: cat /version.json
	I1028 18:14:53.790623   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHHostname
	I1028 18:14:53.793698   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794082   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794106   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.794122   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794271   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.794422   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.794525   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:a1:04", ip: ""} in network mk-NoKubernetes-793119: {Iface:virbr1 ExpiryTime:2024-10-28 19:14:13 +0000 UTC Type:0 Mac:52:54:00:62:a1:04 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:NoKubernetes-793119 Clientid:01:52:54:00:62:a1:04}
	I1028 18:14:53.794539   57381 main.go:141] libmachine: (NoKubernetes-793119) DBG | domain NoKubernetes-793119 has defined IP address 192.168.39.133 and MAC address 52:54:00:62:a1:04 in network mk-NoKubernetes-793119
	I1028 18:14:53.794561   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.794686   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:53.794823   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHPort
	I1028 18:14:53.794934   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHKeyPath
	I1028 18:14:53.795061   57381 main.go:141] libmachine: (NoKubernetes-793119) Calling .GetSSHUsername
	I1028 18:14:53.795219   57381 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/NoKubernetes-793119/id_rsa Username:docker}
	I1028 18:14:53.886067   57381 ssh_runner.go:195] Run: systemctl --version
	I1028 18:14:53.913177   57381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:14:53.490619   56823 pod_ready.go:103] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"False"
	I1028 18:14:53.990589   56823 pod_ready.go:93] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:53.990615   56823 pod_ready.go:82] duration metric: took 7.507597092s for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:53.990630   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.008643   56823 pod_ready.go:93] pod "kube-controller-manager-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:54.008671   56823 pod_ready.go:82] duration metric: took 18.033072ms for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.008688   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.020800   56823 pod_ready.go:93] pod "kube-proxy-5psrd" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:54.020825   56823 pod_ready.go:82] duration metric: took 12.128786ms for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.020838   56823 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.027323   56823 pod_ready.go:93] pod "kube-scheduler-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:54.027347   56823 pod_ready.go:82] duration metric: took 6.500704ms for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:54.027358   56823 pod_ready.go:39] duration metric: took 12.565610801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
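
The pod_ready lines above all follow the same pattern: poll the pod's Ready condition until it flips to True or the per-pod timeout expires, then record the elapsed time as a duration metric. A generic Go sketch of that polling loop is below; isPodReady is a hypothetical stand-in for the real Kubernetes API lookup.

// podready.go: sketch of the pod_ready wait pattern in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// isPodReady is a placeholder; the real check reads the pod's Ready
// condition from the Kubernetes API.
func isPodReady(name string) bool {
	return time.Now().Unix()%2 == 0 // dummy result for the sketch
}

// waitForPodReady polls every interval until the pod is Ready or the timeout
// hits, mirroring the "waiting up to 4m0s for pod ... to be Ready" lines.
func waitForPodReady(name string, interval, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		if isPodReady(name) {
			return time.Since(start), nil
		}
		if time.Now().After(deadline) {
			return time.Since(start), errors.New("timed out waiting for pod " + name)
		}
		time.Sleep(interval)
	}
}

func main() {
	took, err := waitForPodReady("kube-apiserver-pause-006166", 500*time.Millisecond, 4*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %v for pod to be Ready\n", took)
}
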
	I1028 18:14:54.027377   56823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:14:54.046297   56823 ops.go:34] apiserver oom_adj: -16
	I1028 18:14:54.046320   56823 kubeadm.go:597] duration metric: took 20.728306656s to restartPrimaryControlPlane
	I1028 18:14:54.046330   56823 kubeadm.go:394] duration metric: took 20.960106629s to StartCluster
	I1028 18:14:54.046350   56823 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:14:54.046426   56823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:14:54.047299   56823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:14:54.220729   56823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:14:54.221459   56823 config.go:182] Loaded profile config "pause-006166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:14:54.221527   56823 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:14:54.226345   57381 out.go:177]   - Kubernetes: Stopping ...
	I1028 18:14:54.410251   56823 out.go:177] * Verifying Kubernetes components...
	I1028 18:14:54.645261   56823 out.go:177] * Enabled addons: 
	I1028 18:14:54.645334   57381 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1028 18:14:54.679302   57381 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:14:54.679421   57381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:14:54.722597   57381 cri.go:89] found id: "3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce"
	I1028 18:14:54.722610   57381 cri.go:89] found id: "fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b"
	I1028 18:14:54.722615   57381 cri.go:89] found id: "9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9"
	I1028 18:14:54.722619   57381 cri.go:89] found id: "38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365"
	I1028 18:14:54.722622   57381 cri.go:89] found id: "daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa"
	I1028 18:14:54.722625   57381 cri.go:89] found id: "69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c"
	I1028 18:14:54.722628   57381 cri.go:89] found id: "786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761"
	I1028 18:14:54.722630   57381 cri.go:89] found id: ""
	W1028 18:14:54.722645   57381 kubeadm.go:838] found 7 kube-system containers to stop
	I1028 18:14:54.722652   57381 cri.go:252] Stopping containers: [3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b 9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9 38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365 daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa 69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c 786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761]
	I1028 18:14:54.722742   57381 ssh_runner.go:195] Run: which crictl
	I1028 18:14:54.726595   57381 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b 9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9 38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365 daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa 69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c 786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761
	I1028 18:14:56.803546   57381 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3c431c42c03b6a9eeee6a6a09a21549d1b3baf1d64e72a9c1309be0f5153b5ce fea3af7e5a5f7e0de5ab90e5eb692a64424dff7bd6e1cd35afba360fc8ee251b 9678cb5e74b052b2828b47639fc1930692bbdff410355502e9329663009dccc9 38fae466424e0162abc26510977dfdf8390dc71864bf332499edbb5a8b455365 daad61cef2d0ee5ff826dffb8cf45dd098ca106566cec819357188e38e6682fa 69887f6b5b81795801aeb14c83fd3e3b1eea5e8bf672946be278fdc67cc4ec4c 786b824323891e1794a3365266e2881e3ad3e9c0746c9db3366e068e966b4761: (2.076914462s)
	I1028 18:14:56.803615   57381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:14:56.821698   57381 out.go:177]   - Kubernetes: Stopped
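
The "Kubernetes: Stopping ... Stopped" sequence above lists every kube-system container with crictl and then stops them with a 10-second grace period. A Go sketch of those two crictl calls follows; error handling is reduced to the bare minimum for brevity.

// stopkube.go: sketch of listing and stopping kube-system containers as above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers to stop")
		return
	}

	// crictl stop --timeout=10 <id> <id> ...
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d kube-system containers\n", len(ids))
}
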
	I1028 18:14:55.032486   56823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:14:55.171970   56823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:14:55.187665   56823 node_ready.go:35] waiting up to 6m0s for node "pause-006166" to be "Ready" ...
	I1028 18:14:55.215401   56823 addons.go:510] duration metric: took 993.849242ms for enable addons: enabled=[]
	I1028 18:14:55.758845   56823 node_ready.go:49] node "pause-006166" has status "Ready":"True"
	I1028 18:14:55.758870   56823 node_ready.go:38] duration metric: took 571.171084ms for node "pause-006166" to be "Ready" ...
	I1028 18:14:55.758879   56823 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:14:55.763379   56823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.767839   56823 pod_ready.go:93] pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.767857   56823 pod_ready.go:82] duration metric: took 4.449679ms for pod "coredns-7c65d6cfc9-g4r99" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.767866   56823 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.772627   56823 pod_ready.go:93] pod "etcd-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.772652   56823 pod_ready.go:82] duration metric: took 4.780158ms for pod "etcd-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.772665   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.776602   56823 pod_ready.go:93] pod "kube-apiserver-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.776621   56823 pod_ready.go:82] duration metric: took 3.947802ms for pod "kube-apiserver-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.776630   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.780509   56823 pod_ready.go:93] pod "kube-controller-manager-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.780525   56823 pod_ready.go:82] duration metric: took 3.890345ms for pod "kube-controller-manager-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.780534   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.987875   56823 pod_ready.go:93] pod "kube-proxy-5psrd" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:55.987898   56823 pod_ready.go:82] duration metric: took 207.358426ms for pod "kube-proxy-5psrd" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:55.987912   56823 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:56.530089   56823 pod_ready.go:93] pod "kube-scheduler-pause-006166" in "kube-system" namespace has status "Ready":"True"
	I1028 18:14:56.530111   56823 pod_ready.go:82] duration metric: took 542.192667ms for pod "kube-scheduler-pause-006166" in "kube-system" namespace to be "Ready" ...
	I1028 18:14:56.530120   56823 pod_ready.go:39] duration metric: took 771.232869ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
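The pod_ready waits above amount to listing kube-system pods for each label selector and checking the PodReady condition until it reports True or the 6m0s budget runs out. A minimal client-go sketch of that pattern, with the kubeconfig path and poll interval chosen purely for illustration (this is not minikube's own helper):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's PodReady condition is True.
    func podReady(p corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
    	deadline := time.Now().Add(6 * time.Minute)
    	for _, sel := range selectors {
    		for {
    			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
    			if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
    				fmt.Printf("pods matching %q are Ready\n", sel)
    				break
    			}
    			if time.Now().After(deadline) {
    				panic("timed out waiting for " + sel)
    			}
    			time.Sleep(2 * time.Second)
    		}
    	}
    }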
	I1028 18:14:56.530133   56823 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:14:56.530181   56823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:14:56.544218   56823 api_server.go:72] duration metric: took 2.323439848s to wait for apiserver process to appear ...
	I1028 18:14:56.544240   56823 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:14:56.544257   56823 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8443/healthz ...
	I1028 18:14:56.549124   56823 api_server.go:279] https://192.168.61.105:8443/healthz returned 200:
	ok
	I1028 18:14:56.549903   56823 api_server.go:141] control plane version: v1.31.2
	I1028 18:14:56.549920   56823 api_server.go:131] duration metric: took 5.674736ms to wait for apiserver health ...
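The healthz wait that follows the apiserver process check is a plain HTTPS GET against the endpoint logged above, treated as healthy on a 200 response with body "ok". A self-contained sketch of that probe; TLS verification is skipped here only to keep the example short, whereas the real check trusts the cluster's CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Skip certificate verification for illustration only.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.61.105:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }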
	I1028 18:14:56.549928   56823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:14:56.589847   56823 system_pods.go:59] 6 kube-system pods found
	I1028 18:14:56.589871   56823 system_pods.go:61] "coredns-7c65d6cfc9-g4r99" [9b3280f5-4031-4d12-ba29-18994efa2753] Running
	I1028 18:14:56.589875   56823 system_pods.go:61] "etcd-pause-006166" [2172a295-bb1e-4537-bf5d-7e49fd84a4ae] Running
	I1028 18:14:56.589879   56823 system_pods.go:61] "kube-apiserver-pause-006166" [a88b01ba-adb7-4e45-b2b3-e2aed8e432ff] Running
	I1028 18:14:56.589882   56823 system_pods.go:61] "kube-controller-manager-pause-006166" [2752845b-2215-4582-8977-09031047db16] Running
	I1028 18:14:56.589886   56823 system_pods.go:61] "kube-proxy-5psrd" [1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3] Running
	I1028 18:14:56.589889   56823 system_pods.go:61] "kube-scheduler-pause-006166" [7d5d418f-0522-4479-b756-8cda89fdb343] Running
	I1028 18:14:56.589894   56823 system_pods.go:74] duration metric: took 39.961897ms to wait for pod list to return data ...
	I1028 18:14:56.589900   56823 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:14:56.787250   56823 default_sa.go:45] found service account: "default"
	I1028 18:14:56.787281   56823 default_sa.go:55] duration metric: took 197.375279ms for default service account to be created ...
	I1028 18:14:56.787292   56823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:14:56.990987   56823 system_pods.go:86] 6 kube-system pods found
	I1028 18:14:56.991021   56823 system_pods.go:89] "coredns-7c65d6cfc9-g4r99" [9b3280f5-4031-4d12-ba29-18994efa2753] Running
	I1028 18:14:56.991029   56823 system_pods.go:89] "etcd-pause-006166" [2172a295-bb1e-4537-bf5d-7e49fd84a4ae] Running
	I1028 18:14:56.991042   56823 system_pods.go:89] "kube-apiserver-pause-006166" [a88b01ba-adb7-4e45-b2b3-e2aed8e432ff] Running
	I1028 18:14:56.991050   56823 system_pods.go:89] "kube-controller-manager-pause-006166" [2752845b-2215-4582-8977-09031047db16] Running
	I1028 18:14:56.991056   56823 system_pods.go:89] "kube-proxy-5psrd" [1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3] Running
	I1028 18:14:56.991062   56823 system_pods.go:89] "kube-scheduler-pause-006166" [7d5d418f-0522-4479-b756-8cda89fdb343] Running
	I1028 18:14:56.991074   56823 system_pods.go:126] duration metric: took 203.774407ms to wait for k8s-apps to be running ...
	I1028 18:14:56.991087   56823 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:14:56.991134   56823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:14:57.006605   56823 system_svc.go:56] duration metric: took 15.509623ms WaitForService to wait for kubelet
	I1028 18:14:57.006635   56823 kubeadm.go:582] duration metric: took 2.785859208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:14:57.006657   56823 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:14:57.188918   56823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:14:57.188944   56823 node_conditions.go:123] node cpu capacity is 2
	I1028 18:14:57.188954   56823 node_conditions.go:105] duration metric: took 182.292109ms to run NodePressure ...
	I1028 18:14:57.188966   56823 start.go:241] waiting for startup goroutines ...
	I1028 18:14:57.188972   56823 start.go:246] waiting for cluster config update ...
	I1028 18:14:57.188980   56823 start.go:255] writing updated cluster config ...
	I1028 18:14:57.189229   56823 ssh_runner.go:195] Run: rm -f paused
	I1028 18:14:57.246388   56823 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:14:57.248072   56823 out.go:177] * Done! kubectl is now configured to use "pause-006166" cluster and "default" namespace by default
	I1028 18:14:56.823107   57381 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:14:56.973839   57381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:14:56.980577   57381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:14:56.980623   57381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:14:56.990163   57381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 18:14:56.990174   57381 start.go:495] detecting cgroup driver to use...
	I1028 18:14:56.990221   57381 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:14:57.006882   57381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:14:57.021323   57381 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:14:57.021375   57381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:14:57.035422   57381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:14:57.048838   57381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:14:57.184513   57381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:14:57.354826   57381 docker.go:233] disabling docker service ...
	I1028 18:14:57.354879   57381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:14:57.384723   57381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:14:57.402681   57381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:14:57.556823   57381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:14:53.285393   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.321550   57162 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I1028 18:14:53.321591   57162 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.321658   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:53.325327   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.343275   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.353589   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.374415   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.375166   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.377172   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 18:14:53.399939   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.401436   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.406425   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 18:14:53.406618   57162 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I1028 18:14:53.406655   57162 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.406686   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:53.524289   57162 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1028 18:14:53.524342   57162 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.524391   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:53.529247   57162 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1028 18:14:53.529327   57162 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.529383   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:53.538928   57162 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1028 18:14:53.538969   57162 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1028 18:14:53.539007   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:53.545887   57162 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I1028 18:14:53.545921   57162 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.545947   57162 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I1028 18:14:53.545964   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:53.545978   57162 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.545993   57162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I1028 18:14:53.546013   57162 ssh_runner.go:195] Run: which crictl
	I1028 18:14:53.546046   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.546091   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.546099   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.546139   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 18:14:53.618698   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.618741   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.618752   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.618796   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.618850   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 18:14:53.618922   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.718939   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 18:14:53.718971   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.719011   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 18:14:53.719051   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 18:14:53.719073   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.719106   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 18:14:53.800422   57162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1028 18:14:53.800522   57162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 18:14:53.807334   57162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I1028 18:14:53.807393   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 18:14:53.807418   57162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 18:14:53.807393   57162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1028 18:14:53.807460   57162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 18:14:53.807494   57162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 18:14:53.807502   57162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1028 18:14:53.811656   57162 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1028 18:14:53.811684   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I1028 18:14:53.848272   57162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1028 18:14:53.854866   57162 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1028 18:14:53.854896   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I1028 18:14:53.854967   57162 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1028 18:14:53.854999   57162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I1028 18:14:53.855061   57162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I1028 18:14:53.885695   57162 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 18:14:53.885765   57162 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1028 18:14:55.294236   57162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:14:56.878118   57162 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.992322941s)
	I1028 18:14:56.878155   57162 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1028 18:14:56.878156   57162 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.583882927s)
	I1028 18:14:56.878186   57162 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 18:14:56.878239   57162 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1028 18:14:57.218883   57162 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 18:14:57.218925   57162 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1028 18:14:57.218973   57162 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
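The cache_images sequence above follows the same pattern for each image: stat the tarball under /var/lib/minikube/images, transfer it from the local cache when the stat fails, then load it into the CRI-O image store with podman. A rough local sketch of that flow, using the paths logged above; the plain cp stands in for the scp that minikube actually performs over SSH, so this is not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureImageLoaded checks for the image tarball on the node, copies it
    // from the cache if missing, and loads it with podman.
    func ensureImageLoaded(cachePath, nodePath string) error {
    	// Existence check: stat exits non-zero when the file is missing.
    	if err := exec.Command("stat", "-c", "%s %y", nodePath).Run(); err != nil {
    		if err := exec.Command("cp", cachePath, nodePath).Run(); err != nil {
    			return fmt.Errorf("transfer %s: %w", cachePath, err)
    		}
    	}
    	// Load the tarball into the container runtime's image store.
    	out, err := exec.Command("sudo", "podman", "load", "-i", nodePath).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensureImageLoaded(
    		"/home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7",
    		"/var/lib/minikube/images/pause_3.7",
    	); err != nil {
    		fmt.Println(err)
    	}
    }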
	I1028 18:14:57.706223   57381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:14:57.721852   57381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:14:57.744005   57381 download.go:107] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1028 18:14:58.466442   57381 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1028 18:14:58.466492   57381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:58.479023   57381 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:14:58.479073   57381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:58.494278   57381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:58.508172   57381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:14:58.520313   57381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:14:58.533478   57381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:14:58.543590   57381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:14:58.552865   57381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:14:58.686440   57381 ssh_runner.go:195] Run: sudo systemctl restart crio
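Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with lines equivalent to the following (a sketch of the end state, not a full dump of the file):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

CRI-O is then restarted so the new pause image and the cgroupfs cgroup manager take effect before the socket and crictl version checks below.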
	I1028 18:14:58.907948   57381 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:14:58.908009   57381 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:14:58.914343   57381 start.go:563] Will wait 60s for crictl version
	I1028 18:14:58.914400   57381 ssh_runner.go:195] Run: which crictl
	I1028 18:14:58.919381   57381 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:14:58.965439   57381 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:14:58.965496   57381 ssh_runner.go:195] Run: crio --version
	I1028 18:14:58.998448   57381 ssh_runner.go:195] Run: crio --version
	I1028 18:14:59.037264   57381 out.go:177] * Preparing CRI-O 1.29.1 ...
	I1028 18:14:59.039021   57381 ssh_runner.go:195] Run: rm -f paused
	I1028 18:14:59.045231   57381 out.go:177] * Done! minikube is ready without Kubernetes!
	I1028 18:14:59.047906   57381 out.go:201] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.497253076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b72b9673-6693-4bd0-891f-b52a07374b8f name=/runtime.v1.RuntimeService/Version
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.498964664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d2132c2-c20b-44cd-b00a-df798dc49441 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.499495905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139300499459951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d2132c2-c20b-44cd-b00a-df798dc49441 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.500207544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19a3ccc8-3b75-4ff6-b54c-f1996f83abd8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.500304881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19a3ccc8-3b75-4ff6-b54c-f1996f83abd8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.500749703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19a3ccc8-3b75-4ff6-b54c-f1996f83abd8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.546309840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63a515bc-84fa-45c4-9ef7-da452d66375c name=/runtime.v1.RuntimeService/Version
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.546400575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63a515bc-84fa-45c4-9ef7-da452d66375c name=/runtime.v1.RuntimeService/Version
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.547634445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f5adec0-9c6e-421c-abd1-440149e08f98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.548272897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139300548237454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f5adec0-9c6e-421c-abd1-440149e08f98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.549020140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cc3f6b5-2932-4224-a6b1-5c920a7a5bc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.549089304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cc3f6b5-2932-4224-a6b1-5c920a7a5bc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.549329971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cc3f6b5-2932-4224-a6b1-5c920a7a5bc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.596490505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4cdac34-6a67-4bd3-997b-aa672c241467 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.596742286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4cdac34-6a67-4bd3-997b-aa672c241467 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.597988640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02c1b4a3-ae9c-481a-b4ac-ae73b162d8fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.598368176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139300598343634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02c1b4a3-ae9c-481a-b4ac-ae73b162d8fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.599229146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89345a71-9131-409f-9ea8-50b0fa058336 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.599303993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89345a71-9131-409f-9ea8-50b0fa058336 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.599564541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89345a71-9131-409f-9ea8-50b0fa058336 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.608115551Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=48341ad3-f0ec-45cf-bd49-d9a5dd70deb5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.608435455Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&PodSandboxMetadata{Name:etcd-pause-006166,Uid:d682fe0f9537f6c2b87455ddc68feea2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1730139272612486472,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.105:2379,kubernetes.io/config.hash: d682fe0f9537f6c2b87455ddc68feea2,kubernetes.io/config.seen: 2024-10-28T18:13:49.889878888Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&PodSan
dboxMetadata{Name:kube-apiserver-pause-006166,Uid:f456cd75b1fcf480707382b157a813e5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1730139272559816624,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.105:8443,kubernetes.io/config.hash: f456cd75b1fcf480707382b157a813e5,kubernetes.io/config.seen: 2024-10-28T18:13:49.889882044Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-006166,Uid:15f6c19a5de0554106294d1ab48c014e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1730139272554968317,Labels:map[string]string{component: kube-scheduler,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15f6c19a5de0554106294d1ab48c014e,kubernetes.io/config.seen: 2024-10-28T18:13:49.889883950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-g4r99,Uid:9b3280f5-4031-4d12-ba29-18994efa2753,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1730139272553214275,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:13:55.005082559Z,kubern
etes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-006166,Uid:2c08a24ff9234ee623f85509a667edf2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1730139272551825026,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2c08a24ff9234ee623f85509a667edf2,kubernetes.io/config.seen: 2024-10-28T18:13:49.889883129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&PodSandboxMetadata{Name:kube-proxy-5psrd,Uid:1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,Cre
atedAt:1730139272531531460,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:13:54.804253203Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&PodSandboxMetadata{Name:etcd-pause-006166,Uid:d682fe0f9537f6c2b87455ddc68feea2,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1730139269909643952,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-url
s: https://192.168.61.105:2379,kubernetes.io/config.hash: d682fe0f9537f6c2b87455ddc68feea2,kubernetes.io/config.seen: 2024-10-28T18:13:49.889878888Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-006166,Uid:15f6c19a5de0554106294d1ab48c014e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1730139269902484049,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15f6c19a5de0554106294d1ab48c014e,kubernetes.io/config.seen: 2024-10-28T18:13:49.889883950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&PodSand
boxMetadata{Name:kube-controller-manager-pause-006166,Uid:2c08a24ff9234ee623f85509a667edf2,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1730139269887992559,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2c08a24ff9234ee623f85509a667edf2,kubernetes.io/config.seen: 2024-10-28T18:13:49.889883129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-g4r99,Uid:9b3280f5-4031-4d12-ba29-18994efa2753,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1730139269873090197,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-g
4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:13:55.005082559Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&PodSandboxMetadata{Name:kube-proxy-5psrd,Uid:1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1730139269865648241,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:13:54.804253203Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cfc575eea00d13c0243a89d4
d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-006166,Uid:f456cd75b1fcf480707382b157a813e5,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1730139269429155139,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.105:8443,kubernetes.io/config.hash: f456cd75b1fcf480707382b157a813e5,kubernetes.io/config.seen: 2024-10-28T18:13:49.889882044Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=48341ad3-f0ec-45cf-bd49-d9a5dd70deb5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.609754987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8df31115-c388-47ec-b4c3-eb8a02075724 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.609829952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8df31115-c388-47ec-b4c3-eb8a02075724 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:15:00 pause-006166 crio[2614]: time="2024-10-28 18:15:00.610100601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380,PodSandboxId:b04fb54c2dabc1e6cada6f3c2ac5028e662dabc1e9f188c506144b5893b33886,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730139280653923931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f,PodSandboxId:9b4ebd81f9dfacf8e5d799fd6203efec20928d940725fa7f3442d17dcc080a04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730139280640181862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd,PodSandboxId:63691074101def638080da051e31efe15278dcc1ee500213d94fd0c8e4d602f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730139275855582721,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652,PodSandboxId:88868adfab974b085d7b2c45cde023c7f399479fcfc3d00b23a75d5e824a6403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730139275879769180,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]
string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0,PodSandboxId:94475506e692580976c025c434254aca5a73becd5becaf7416e32cc80afd4b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730139275864456413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernet
es.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775,PodSandboxId:fe341f09646618464cdaa6017a545896a4d17900993cdd34fac651e656925d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730139275830373427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io
.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9,PodSandboxId:0db28c281ef5aa4fb65be1ed8cd1471b38146d6e8c782d7a900f4ff128beb1a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730139270593878488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5psrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187
fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8,PodSandboxId:8bde608dff59c4a2571a09550748218d217cf444fe8bdeb005d39a06f84a9e09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730139270683373289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f6c19a5de0554106294d1ab48c014e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76,PodSandboxId:d8bf458555813e026bf8179a2106fa80de47a043f7bdf8ea40a320254bb08c27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730139270690509191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g4r99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b3280f5-4031-4d12-ba29-18994efa2753,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\
",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9,PodSandboxId:0f58c1ff5c9d0f7d8f7236c67344033410d93ab58efe4782c597756e749dae0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730139270621192277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-006166,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d682fe0f9537f6c2b87455ddc68feea2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c,PodSandboxId:93e69e8607815abdc7f61338b647b56c773864bb7580d93a71437c867909168a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730139270476930588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-006166,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 2c08a24ff9234ee623f85509a667edf2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7,PodSandboxId:0cfc575eea00d13c0243a89d4d34d23e37bbc5f590133a4441b158ab563f7f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730139270091031657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-006166,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f456cd75b1fcf480707382b157a813e5,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8df31115-c388-47ec-b4c3-eb8a02075724 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cd9b8c69bb7c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   20 seconds ago      Running             coredns                   2                   b04fb54c2dabc       coredns-7c65d6cfc9-g4r99
	8229a7929ab0c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   20 seconds ago      Running             kube-proxy                2                   9b4ebd81f9dfa       kube-proxy-5psrd
	9c211dfcb5657       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   24 seconds ago      Running             kube-apiserver            2                   88868adfab974       kube-apiserver-pause-006166
	caf86c1171b94       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   24 seconds ago      Running             kube-scheduler            2                   94475506e6925       kube-scheduler-pause-006166
	32b777c4c83c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago      Running             etcd                      2                   63691074101de       etcd-pause-006166
	9c7209ab2a7ee       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   24 seconds ago      Running             kube-controller-manager   2                   fe341f0964661       kube-controller-manager-pause-006166
	d7f73994320d1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago      Exited              coredns                   1                   d8bf458555813       coredns-7c65d6cfc9-g4r99
	e9a91087ea6b2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   30 seconds ago      Exited              kube-scheduler            1                   8bde608dff59c       kube-scheduler-pause-006166
	a935a4471c892       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   30 seconds ago      Exited              etcd                      1                   0f58c1ff5c9d0       etcd-pause-006166
	c1b5dc9bf9cc2       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   30 seconds ago      Exited              kube-proxy                1                   0db28c281ef5a       kube-proxy-5psrd
	8f31cf1ecf742       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   30 seconds ago      Exited              kube-controller-manager   1                   93e69e8607815       kube-controller-manager-pause-006166
	ae29799650d92       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   30 seconds ago      Exited              kube-apiserver            1                   0cfc575eea00d       kube-apiserver-pause-006166
	
	
	==> coredns [0cd9b8c69bb7ccd76aef41565d787dfa6fc5972475ca9a700fd656a0f7330380] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35445 - 41977 "HINFO IN 1385094499035695066.4011758787778453954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00772471s
	
	
	==> coredns [d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76] <==
	
	
	==> describe nodes <==
	Name:               pause-006166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-006166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=pause-006166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_13_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-006166
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:14:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:14:39 +0000   Mon, 28 Oct 2024 18:13:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.105
	  Hostname:    pause-006166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 987d9680a6d1441092630e72d82ce270
	  System UUID:                987d9680-a6d1-4410-9263-0e72d82ce270
	  Boot ID:                    476a1be5-a37f-4aa5-9fb8-cc800e8f881b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-g4r99                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 etcd-pause-006166                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         70s
	  kube-system                 kube-apiserver-pause-006166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-006166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-5psrd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-006166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     70s                kubelet          Node pause-006166 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node pause-006166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node pause-006166 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                69s                kubelet          Node pause-006166 status is now: NodeReady
	  Normal  RegisteredNode           67s                node-controller  Node pause-006166 event: Registered Node pause-006166 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-006166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-006166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-006166 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-006166 event: Registered Node pause-006166 in Controller
	
	
	==> dmesg <==
	[  +8.475211] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057378] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061230] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.218895] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.135874] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.302052] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.156057] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.579715] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.065215] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.484338] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.109612] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.728043] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +0.673899] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 18:14] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.321084] systemd-fstab-generator[2076]: Ignoring "noauto" option for root device
	[  +0.144045] systemd-fstab-generator[2094]: Ignoring "noauto" option for root device
	[  +0.229027] systemd-fstab-generator[2118]: Ignoring "noauto" option for root device
	[  +0.226231] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.534761] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +1.194438] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +3.219710] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[  +0.076388] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.529833] kauditd_printk_skb: 38 callbacks suppressed
	[ +14.362079] systemd-fstab-generator[3691]: Ignoring "noauto" option for root device
	[  +0.083438] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [32b777c4c83c7738bf9b2350088aa25743cda7dc9add9aadc787ed82d99580cd] <==
	{"level":"info","ts":"2024-10-28T18:14:37.927093Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:14:37.927137Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T18:14:37.927269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:14:37.928325Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:14:37.928412Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:14:37.929464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.105:2379"}
	{"level":"info","ts":"2024-10-28T18:14:37.929759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-10-28T18:14:55.616877Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.218337894s","expected-duration":"1s"}
	{"level":"warn","ts":"2024-10-28T18:14:55.617086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.218584626s","expected-duration":"100ms","prefix":"","request":"header:<ID:4108851675723447627 > lease_revoke:<id:390592d4557a8fdd>","response":"size:28"}
	{"level":"warn","ts":"2024-10-28T18:14:55.743313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.105335ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4108851675723447628 > lease_revoke:<id:390592d4557a8fbc>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T18:14:55.743426Z","caller":"traceutil/trace.go:171","msg":"trace[992174360] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:491; }","duration":"1.439412503s","start":"2024-10-28T18:14:54.304004Z","end":"2024-10-28T18:14:55.743416Z","steps":["trace[992174360] 'read index received'  (duration: 94.705615ms)","trace[992174360] 'applied index is now lower than readState.Index'  (duration: 1.344706154s)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T18:14:55.743510Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.439501165s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:14:55.743547Z","caller":"traceutil/trace.go:171","msg":"trace[257147906] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:458; }","duration":"1.439541164s","start":"2024-10-28T18:14:54.303996Z","end":"2024-10-28T18:14:55.743537Z","steps":["trace[257147906] 'agreement among raft nodes before linearized reading'  (duration: 1.439489955s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:55.743628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"798.573974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:14:55.743737Z","caller":"traceutil/trace.go:171","msg":"trace[300687829] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:458; }","duration":"798.696814ms","start":"2024-10-28T18:14:54.945032Z","end":"2024-10-28T18:14:55.743729Z","steps":["trace[300687829] 'agreement among raft nodes before linearized reading'  (duration: 798.542628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:55.744320Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:14:54.944996Z","time spent":"799.313162ms","remote":"127.0.0.1:55324","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-28T18:14:55.744092Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"567.17316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-006166\" ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2024-10-28T18:14:55.744858Z","caller":"traceutil/trace.go:171","msg":"trace[4375928] range","detail":"{range_begin:/registry/minions/pause-006166; range_end:; response_count:1; response_revision:458; }","duration":"567.93965ms","start":"2024-10-28T18:14:55.176910Z","end":"2024-10-28T18:14:55.744850Z","steps":["trace[4375928] 'agreement among raft nodes before linearized reading'  (duration: 567.066194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:55.744906Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:14:55.176875Z","time spent":"568.019666ms","remote":"127.0.0.1:55470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":1,"response size":5451,"request content":"key:\"/registry/minions/pause-006166\" "}
	{"level":"warn","ts":"2024-10-28T18:14:56.515289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.802277ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4108851675723447647 > lease_revoke:<id:390592d45643bc16>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T18:14:56.515456Z","caller":"traceutil/trace.go:171","msg":"trace[1108757725] linearizableReadLoop","detail":"{readStateIndex:494; appliedIndex:493; }","duration":"211.374336ms","start":"2024-10-28T18:14:56.304068Z","end":"2024-10-28T18:14:56.515443Z","steps":["trace[1108757725] 'read index received'  (duration: 65.370441ms)","trace[1108757725] 'applied index is now lower than readState.Index'  (duration: 146.002327ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T18:14:56.515569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.50378ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:14:56.515622Z","caller":"traceutil/trace.go:171","msg":"trace[1460295377] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:458; }","duration":"211.572692ms","start":"2024-10-28T18:14:56.304041Z","end":"2024-10-28T18:14:56.515614Z","steps":["trace[1460295377] 'agreement among raft nodes before linearized reading'  (duration: 211.480979ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:14:56.515589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.752301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-006166\" ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2024-10-28T18:14:56.515906Z","caller":"traceutil/trace.go:171","msg":"trace[413452735] range","detail":"{range_begin:/registry/minions/pause-006166; range_end:; response_count:1; response_revision:458; }","duration":"142.022143ms","start":"2024-10-28T18:14:56.373824Z","end":"2024-10-28T18:14:56.515847Z","steps":["trace[413452735] 'agreement among raft nodes before linearized reading'  (duration: 141.718084ms)"],"step_count":1}
	
	
	==> etcd [a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9] <==
	
	
	==> kernel <==
	 18:15:01 up 1 min,  0 users,  load average: 0.47, 0.19, 0.07
	Linux pause-006166 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9c211dfcb56575888962fa2b2e40c12f06ad39d950ac0fa6671506b56d93e652] <==
	I1028 18:14:39.344065       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1028 18:14:39.356413       1 aggregator.go:171] initial CRD sync complete...
	I1028 18:14:39.356452       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 18:14:39.356485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 18:14:39.381112       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 18:14:39.381157       1 policy_source.go:224] refreshing policies
	I1028 18:14:39.411271       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 18:14:39.417866       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 18:14:39.418207       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 18:14:39.418252       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 18:14:39.418379       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 18:14:39.418590       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 18:14:39.418778       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 18:14:39.420614       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 18:14:39.427181       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 18:14:39.461774       1 cache.go:39] Caches are synced for autoregister controller
	I1028 18:14:39.467012       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 18:14:40.338375       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 18:14:41.270052       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 18:14:41.288278       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 18:14:41.357158       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 18:14:41.412143       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 18:14:41.425523       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 18:14:42.884266       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 18:14:43.032276       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7] <==
	I1028 18:14:30.846297       1 options.go:228] external host was not specified, using 192.168.61.105
	I1028 18:14:30.860969       1 server.go:142] Version: v1.31.2
	I1028 18:14:30.861021       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c] <==
	
	
	==> kube-controller-manager [9c7209ab2a7eeb870fcc3434548dafeb687616b5e5fccc2c54e4292aba004775] <==
	I1028 18:14:42.741752       1 shared_informer.go:320] Caches are synced for crt configmap
	I1028 18:14:42.745929       1 shared_informer.go:320] Caches are synced for cronjob
	I1028 18:14:42.748340       1 shared_informer.go:320] Caches are synced for endpoint
	I1028 18:14:42.748756       1 shared_informer.go:320] Caches are synced for taint
	I1028 18:14:42.748866       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 18:14:42.749037       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-006166"
	I1028 18:14:42.749150       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 18:14:42.761407       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 18:14:42.765555       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 18:14:42.793915       1 shared_informer.go:320] Caches are synced for deployment
	I1028 18:14:42.801566       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1028 18:14:42.831005       1 shared_informer.go:320] Caches are synced for disruption
	I1028 18:14:42.840207       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 18:14:42.850908       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 18:14:42.856217       1 shared_informer.go:320] Caches are synced for ephemeral
	I1028 18:14:42.868744       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 18:14:42.879005       1 shared_informer.go:320] Caches are synced for expand
	I1028 18:14:42.917193       1 shared_informer.go:320] Caches are synced for PVC protection
	I1028 18:14:42.930538       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 18:14:42.948043       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 18:14:43.139428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="337.722803ms"
	I1028 18:14:43.139745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="98.709µs"
	I1028 18:14:43.394504       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 18:14:43.412908       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 18:14:43.413007       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [8229a7929ab0cbb7fee2119c22eeef1590722249b28f5c8bb380e6c6278f5f3f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:14:40.921357       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:14:40.938904       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.105"]
	E1028 18:14:40.939007       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:14:40.988313       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:14:40.988384       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:14:40.988418       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:14:40.991723       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:14:40.992069       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:14:40.992110       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:14:40.994384       1 config.go:199] "Starting service config controller"
	I1028 18:14:40.994650       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:14:40.994786       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:14:40.994819       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:14:40.995549       1 config.go:328] "Starting node config controller"
	I1028 18:14:40.995589       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:14:41.096120       1 shared_informer.go:320] Caches are synced for node config
	I1028 18:14:41.096299       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:14:41.096432       1 shared_informer.go:320] Caches are synced for endpoint slice config
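
Note: the truncated "add table ip kube-proxy" / "Error cleaning up nftables rules ... Operation not supported" messages at the top of this kube-proxy log come from a best-effort cleanup of nftables rules; kube-proxy then proceeds with the iptables backend ("Using iptables Proxier" above), so these errors are most likely unrelated to the test failure. As a sketch, one way to confirm that nftables is simply unavailable in the guest (assumptions: the pause-006166 profile is still running and the nft binary exists in the Buildroot guest image, which it may not):

	# expected to fail with "Operation not supported" (or "command not found"), matching the kube-proxy error above
	out/minikube-linux-amd64 ssh -p pause-006166 -- sudo nft list tables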
	
	
	==> kube-proxy [c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9] <==
	
	
	==> kube-scheduler [caf86c1171b94644a319b775f133fe19177ced700566ddc1582578232d1f99f0] <==
	I1028 18:14:37.035435       1 serving.go:386] Generated self-signed cert in-memory
	W1028 18:14:39.363832       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 18:14:39.366200       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 18:14:39.366387       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 18:14:39.366414       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 18:14:39.393562       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 18:14:39.393643       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:14:39.397062       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 18:14:39.397108       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 18:14:39.397273       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 18:14:39.397362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 18:14:39.497497       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8] <==
	
	
	==> kubelet <==
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.706587    3249 kubelet_node_status.go:72] "Attempting to register node" node="pause-006166"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: E1028 18:14:35.707599    3249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.105:8443: connect: connection refused" node="pause-006166"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.810523    3249 scope.go:117] "RemoveContainer" containerID="a935a4471c8926807d5a0850fba3029e9996cf444e3b082e20f49093f9f812a9"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.811137    3249 scope.go:117] "RemoveContainer" containerID="ae29799650d92d6c611cb07e239aad0d28c1ad7f322874d830dbf8edba7c74e7"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.812798    3249 scope.go:117] "RemoveContainer" containerID="8f31cf1ecf742b8bbf405692c8c1edd50fad4866ca05d1deb55b139750f3ca1c"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: I1028 18:14:35.814632    3249 scope.go:117] "RemoveContainer" containerID="e9a91087ea6b25e072d85549262d00e50eb3b9278d793c38dbbf6372a8dba7e8"
	Oct 28 18:14:35 pause-006166 kubelet[3249]: E1028 18:14:35.929477    3249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-006166?timeout=10s\": dial tcp 192.168.61.105:8443: connect: connection refused" interval="800ms"
	Oct 28 18:14:36 pause-006166 kubelet[3249]: I1028 18:14:36.109202    3249 kubelet_node_status.go:72] "Attempting to register node" node="pause-006166"
	Oct 28 18:14:36 pause-006166 kubelet[3249]: E1028 18:14:36.111570    3249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.105:8443: connect: connection refused" node="pause-006166"
	Oct 28 18:14:36 pause-006166 kubelet[3249]: I1028 18:14:36.913840    3249 kubelet_node_status.go:72] "Attempting to register node" node="pause-006166"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.460628    3249 kubelet_node_status.go:111] "Node was previously registered" node="pause-006166"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.460854    3249 kubelet_node_status.go:75] "Successfully registered node" node="pause-006166"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.460880    3249 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 18:14:39 pause-006166 kubelet[3249]: I1028 18:14:39.462148    3249 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.303269    3249 apiserver.go:52] "Watching apiserver"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.317577    3249 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.376849    3249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3-xtables-lock\") pod \"kube-proxy-5psrd\" (UID: \"1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3\") " pod="kube-system/kube-proxy-5psrd"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.376957    3249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3-lib-modules\") pod \"kube-proxy-5psrd\" (UID: \"1ea9479e-a8e7-4db8-b384-7f9e1ba8f3f3\") " pod="kube-system/kube-proxy-5psrd"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.609087    3249 scope.go:117] "RemoveContainer" containerID="d7f73994320d1cf4467d6fb5c633f068b42c9e561a704bec5efafc4f229cfa76"
	Oct 28 18:14:40 pause-006166 kubelet[3249]: I1028 18:14:40.610377    3249 scope.go:117] "RemoveContainer" containerID="c1b5dc9bf9cc22509b710d1c06c50b947cc09282f0142bfc2b873e1853f4d8c9"
	Oct 28 18:14:42 pause-006166 kubelet[3249]: I1028 18:14:42.573104    3249 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 28 18:14:45 pause-006166 kubelet[3249]: E1028 18:14:45.406541    3249 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139285406301822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:14:45 pause-006166 kubelet[3249]: E1028 18:14:45.406587    3249 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139285406301822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:14:55 pause-006166 kubelet[3249]: E1028 18:14:55.407717    3249 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139295407312375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:14:55 pause-006166 kubelet[3249]: E1028 18:14:55.407742    3249 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730139295407312375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-006166 -n pause-006166
helpers_test.go:261: (dbg) Run:  kubectl --context pause-006166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (59.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (273.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-223868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1028 18:18:21.465482   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-223868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m33.204696732s)

                                                
                                                
-- stdout --
	* [old-k8s-version-223868] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-223868" primary control-plane node in "old-k8s-version-223868" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 18:18:06.829417   63347 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:18:06.829546   63347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:18:06.829555   63347 out.go:358] Setting ErrFile to fd 2...
	I1028 18:18:06.829561   63347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:18:06.829729   63347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:18:06.830325   63347 out.go:352] Setting JSON to false
	I1028 18:18:06.831246   63347 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7230,"bootTime":1730132257,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:18:06.831340   63347 start.go:139] virtualization: kvm guest
	I1028 18:18:06.833352   63347 out.go:177] * [old-k8s-version-223868] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:18:06.834625   63347 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:18:06.834620   63347 notify.go:220] Checking for updates...
	I1028 18:18:06.835979   63347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:18:06.837308   63347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:18:06.838766   63347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:18:06.840066   63347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:18:06.841355   63347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:18:06.843165   63347 config.go:182] Loaded profile config "cert-expiration-559364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:18:06.843310   63347 config.go:182] Loaded profile config "kubernetes-upgrade-192352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:18:06.843442   63347 config.go:182] Loaded profile config "running-upgrade-703793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 18:18:06.843532   63347 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:18:06.882000   63347 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 18:18:06.883354   63347 start.go:297] selected driver: kvm2
	I1028 18:18:06.883368   63347 start.go:901] validating driver "kvm2" against <nil>
	I1028 18:18:06.883378   63347 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:18:06.884151   63347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:18:06.884234   63347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:18:06.900538   63347 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:18:06.900580   63347 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 18:18:06.900800   63347 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:18:06.900828   63347 cni.go:84] Creating CNI manager for ""
	I1028 18:18:06.900889   63347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:18:06.900897   63347 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 18:18:06.900947   63347 start.go:340] cluster config:
	{Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:18:06.901041   63347 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:18:06.902720   63347 out.go:177] * Starting "old-k8s-version-223868" primary control-plane node in "old-k8s-version-223868" cluster
	I1028 18:18:06.904018   63347 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:18:06.904047   63347 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 18:18:06.904054   63347 cache.go:56] Caching tarball of preloaded images
	I1028 18:18:06.904118   63347 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:18:06.904134   63347 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 18:18:06.904211   63347 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:18:06.904228   63347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json: {Name:mk36a2a8cdd3a35d91b2b780c23f459b3940f863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:06.904348   63347 start.go:360] acquireMachinesLock for old-k8s-version-223868: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:18:09.605969   63347 start.go:364] duration metric: took 2.70158446s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:18:09.606055   63347 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:18:09.606199   63347 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 18:18:09.608443   63347 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 18:18:09.608655   63347 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:18:09.608717   63347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:18:09.629187   63347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I1028 18:18:09.629643   63347 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:18:09.630407   63347 main.go:141] libmachine: Using API Version  1
	I1028 18:18:09.630431   63347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:18:09.630795   63347 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:18:09.631126   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:18:09.631279   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:09.631419   63347 start.go:159] libmachine.API.Create for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:18:09.631448   63347 client.go:168] LocalClient.Create starting
	I1028 18:18:09.631483   63347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 18:18:09.631528   63347 main.go:141] libmachine: Decoding PEM data...
	I1028 18:18:09.631551   63347 main.go:141] libmachine: Parsing certificate...
	I1028 18:18:09.631621   63347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 18:18:09.631648   63347 main.go:141] libmachine: Decoding PEM data...
	I1028 18:18:09.631661   63347 main.go:141] libmachine: Parsing certificate...
	I1028 18:18:09.631683   63347 main.go:141] libmachine: Running pre-create checks...
	I1028 18:18:09.631703   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .PreCreateCheck
	I1028 18:18:09.632108   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:18:09.632577   63347 main.go:141] libmachine: Creating machine...
	I1028 18:18:09.632590   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .Create
	I1028 18:18:09.632995   63347 main.go:141] libmachine: (old-k8s-version-223868) Creating KVM machine...
	I1028 18:18:09.634838   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found existing default KVM network
	I1028 18:18:09.635858   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:09.635672   63392 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:cf:44} reservation:<nil>}
	I1028 18:18:09.636985   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:09.636906   63392 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6d:4b:37} reservation:<nil>}
	I1028 18:18:09.638084   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:09.638015   63392 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0b:77:24} reservation:<nil>}
	I1028 18:18:09.640718   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:09.640551   63392 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1028 18:18:09.641974   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:09.641893   63392 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000384c30}
	I1028 18:18:09.641991   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | created network xml: 
	I1028 18:18:09.642010   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | <network>
	I1028 18:18:09.642018   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |   <name>mk-old-k8s-version-223868</name>
	I1028 18:18:09.642027   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |   <dns enable='no'/>
	I1028 18:18:09.642040   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |   
	I1028 18:18:09.642051   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I1028 18:18:09.642059   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |     <dhcp>
	I1028 18:18:09.642068   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I1028 18:18:09.642075   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |     </dhcp>
	I1028 18:18:09.642083   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |   </ip>
	I1028 18:18:09.642090   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG |   
	I1028 18:18:09.642096   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | </network>
	I1028 18:18:09.642103   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | 
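	For readability, the network XML printed across the DBG lines above, reassembled with the log prefixes stripped (content is verbatim from the log; nothing has been added):
	<network>
	  <name>mk-old-k8s-version-223868</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>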
	I1028 18:18:09.648755   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | trying to create private KVM network mk-old-k8s-version-223868 192.168.83.0/24...
	I1028 18:18:09.735892   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | private KVM network mk-old-k8s-version-223868 192.168.83.0/24 created
	I1028 18:18:09.735920   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:09.735860   63392 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:18:09.735932   63347 main.go:141] libmachine: (old-k8s-version-223868) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868 ...
	I1028 18:18:09.735963   63347 main.go:141] libmachine: (old-k8s-version-223868) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 18:18:09.736105   63347 main.go:141] libmachine: (old-k8s-version-223868) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 18:18:10.048715   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:10.048610   63392 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa...
	I1028 18:18:10.281618   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:10.281506   63392 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/old-k8s-version-223868.rawdisk...
	I1028 18:18:10.281690   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Writing magic tar header
	I1028 18:18:10.281717   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Writing SSH key tar header
	I1028 18:18:10.281879   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:10.281792   63392 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868 ...
	I1028 18:18:10.281965   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868
	I1028 18:18:10.282001   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 18:18:10.282012   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:18:10.282052   63347 main.go:141] libmachine: (old-k8s-version-223868) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868 (perms=drwx------)
	I1028 18:18:10.282084   63347 main.go:141] libmachine: (old-k8s-version-223868) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 18:18:10.282095   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 18:18:10.282111   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 18:18:10.282120   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Checking permissions on dir: /home/jenkins
	I1028 18:18:10.282129   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Checking permissions on dir: /home
	I1028 18:18:10.282140   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Skipping /home - not owner
	I1028 18:18:10.282152   63347 main.go:141] libmachine: (old-k8s-version-223868) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 18:18:10.282164   63347 main.go:141] libmachine: (old-k8s-version-223868) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 18:18:10.282179   63347 main.go:141] libmachine: (old-k8s-version-223868) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 18:18:10.282190   63347 main.go:141] libmachine: (old-k8s-version-223868) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 18:18:10.282200   63347 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:18:10.284088   63347 main.go:141] libmachine: (old-k8s-version-223868) define libvirt domain using xml: 
	I1028 18:18:10.284106   63347 main.go:141] libmachine: (old-k8s-version-223868) <domain type='kvm'>
	I1028 18:18:10.284117   63347 main.go:141] libmachine: (old-k8s-version-223868)   <name>old-k8s-version-223868</name>
	I1028 18:18:10.284125   63347 main.go:141] libmachine: (old-k8s-version-223868)   <memory unit='MiB'>2200</memory>
	I1028 18:18:10.284133   63347 main.go:141] libmachine: (old-k8s-version-223868)   <vcpu>2</vcpu>
	I1028 18:18:10.284150   63347 main.go:141] libmachine: (old-k8s-version-223868)   <features>
	I1028 18:18:10.284159   63347 main.go:141] libmachine: (old-k8s-version-223868)     <acpi/>
	I1028 18:18:10.284166   63347 main.go:141] libmachine: (old-k8s-version-223868)     <apic/>
	I1028 18:18:10.284174   63347 main.go:141] libmachine: (old-k8s-version-223868)     <pae/>
	I1028 18:18:10.284181   63347 main.go:141] libmachine: (old-k8s-version-223868)     
	I1028 18:18:10.284189   63347 main.go:141] libmachine: (old-k8s-version-223868)   </features>
	I1028 18:18:10.284196   63347 main.go:141] libmachine: (old-k8s-version-223868)   <cpu mode='host-passthrough'>
	I1028 18:18:10.284223   63347 main.go:141] libmachine: (old-k8s-version-223868)   
	I1028 18:18:10.284230   63347 main.go:141] libmachine: (old-k8s-version-223868)   </cpu>
	I1028 18:18:10.284238   63347 main.go:141] libmachine: (old-k8s-version-223868)   <os>
	I1028 18:18:10.284245   63347 main.go:141] libmachine: (old-k8s-version-223868)     <type>hvm</type>
	I1028 18:18:10.284256   63347 main.go:141] libmachine: (old-k8s-version-223868)     <boot dev='cdrom'/>
	I1028 18:18:10.284262   63347 main.go:141] libmachine: (old-k8s-version-223868)     <boot dev='hd'/>
	I1028 18:18:10.284271   63347 main.go:141] libmachine: (old-k8s-version-223868)     <bootmenu enable='no'/>
	I1028 18:18:10.284277   63347 main.go:141] libmachine: (old-k8s-version-223868)   </os>
	I1028 18:18:10.284284   63347 main.go:141] libmachine: (old-k8s-version-223868)   <devices>
	I1028 18:18:10.284291   63347 main.go:141] libmachine: (old-k8s-version-223868)     <disk type='file' device='cdrom'>
	I1028 18:18:10.284303   63347 main.go:141] libmachine: (old-k8s-version-223868)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/boot2docker.iso'/>
	I1028 18:18:10.284311   63347 main.go:141] libmachine: (old-k8s-version-223868)       <target dev='hdc' bus='scsi'/>
	I1028 18:18:10.284320   63347 main.go:141] libmachine: (old-k8s-version-223868)       <readonly/>
	I1028 18:18:10.284326   63347 main.go:141] libmachine: (old-k8s-version-223868)     </disk>
	I1028 18:18:10.284335   63347 main.go:141] libmachine: (old-k8s-version-223868)     <disk type='file' device='disk'>
	I1028 18:18:10.284344   63347 main.go:141] libmachine: (old-k8s-version-223868)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 18:18:10.284357   63347 main.go:141] libmachine: (old-k8s-version-223868)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/old-k8s-version-223868.rawdisk'/>
	I1028 18:18:10.284365   63347 main.go:141] libmachine: (old-k8s-version-223868)       <target dev='hda' bus='virtio'/>
	I1028 18:18:10.284372   63347 main.go:141] libmachine: (old-k8s-version-223868)     </disk>
	I1028 18:18:10.284387   63347 main.go:141] libmachine: (old-k8s-version-223868)     <interface type='network'>
	I1028 18:18:10.284395   63347 main.go:141] libmachine: (old-k8s-version-223868)       <source network='mk-old-k8s-version-223868'/>
	I1028 18:18:10.284402   63347 main.go:141] libmachine: (old-k8s-version-223868)       <model type='virtio'/>
	I1028 18:18:10.284409   63347 main.go:141] libmachine: (old-k8s-version-223868)     </interface>
	I1028 18:18:10.284416   63347 main.go:141] libmachine: (old-k8s-version-223868)     <interface type='network'>
	I1028 18:18:10.284425   63347 main.go:141] libmachine: (old-k8s-version-223868)       <source network='default'/>
	I1028 18:18:10.284431   63347 main.go:141] libmachine: (old-k8s-version-223868)       <model type='virtio'/>
	I1028 18:18:10.284441   63347 main.go:141] libmachine: (old-k8s-version-223868)     </interface>
	I1028 18:18:10.284448   63347 main.go:141] libmachine: (old-k8s-version-223868)     <serial type='pty'>
	I1028 18:18:10.284455   63347 main.go:141] libmachine: (old-k8s-version-223868)       <target port='0'/>
	I1028 18:18:10.284461   63347 main.go:141] libmachine: (old-k8s-version-223868)     </serial>
	I1028 18:18:10.284493   63347 main.go:141] libmachine: (old-k8s-version-223868)     <console type='pty'>
	I1028 18:18:10.284502   63347 main.go:141] libmachine: (old-k8s-version-223868)       <target type='serial' port='0'/>
	I1028 18:18:10.284509   63347 main.go:141] libmachine: (old-k8s-version-223868)     </console>
	I1028 18:18:10.284523   63347 main.go:141] libmachine: (old-k8s-version-223868)     <rng model='virtio'>
	I1028 18:18:10.284535   63347 main.go:141] libmachine: (old-k8s-version-223868)       <backend model='random'>/dev/random</backend>
	I1028 18:18:10.284541   63347 main.go:141] libmachine: (old-k8s-version-223868)     </rng>
	I1028 18:18:10.284548   63347 main.go:141] libmachine: (old-k8s-version-223868)     
	I1028 18:18:10.284558   63347 main.go:141] libmachine: (old-k8s-version-223868)     
	I1028 18:18:10.284567   63347 main.go:141] libmachine: (old-k8s-version-223868)   </devices>
	I1028 18:18:10.284577   63347 main.go:141] libmachine: (old-k8s-version-223868) </domain>
	I1028 18:18:10.284591   63347 main.go:141] libmachine: (old-k8s-version-223868) 
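	Likewise, the libvirt domain XML that libmachine defines for this VM, reassembled verbatim from the lines above with the log prefixes stripped (the empty placeholder lines from the log are omitted):
	<domain type='kvm'>
	  <name>old-k8s-version-223868</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/old-k8s-version-223868.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-old-k8s-version-223868'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>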
	I1028 18:18:10.289674   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:3a:8e:68 in network default
	I1028 18:18:10.290501   63347 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:18:10.290536   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:10.291382   63347 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:18:10.291760   63347 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:18:10.292655   63347 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:18:10.293624   63347 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:18:11.774019   63347 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:18:11.775153   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:11.775720   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:11.775748   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:11.775695   63392 retry.go:31] will retry after 215.196239ms: waiting for machine to come up
	I1028 18:18:11.992388   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:11.993194   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:11.993226   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:11.993149   63392 retry.go:31] will retry after 297.587037ms: waiting for machine to come up
	I1028 18:18:12.293680   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:12.294287   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:12.294314   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:12.294246   63392 retry.go:31] will retry after 340.197462ms: waiting for machine to come up
	I1028 18:18:12.637130   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:12.637828   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:12.637858   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:12.637726   63392 retry.go:31] will retry after 559.36597ms: waiting for machine to come up
	I1028 18:18:13.199226   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:13.199799   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:13.199820   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:13.199754   63392 retry.go:31] will retry after 546.064322ms: waiting for machine to come up
	I1028 18:18:13.747170   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:13.747781   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:13.747804   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:13.747721   63392 retry.go:31] will retry after 926.86128ms: waiting for machine to come up
	I1028 18:18:14.675935   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:14.676404   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:14.676425   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:14.676354   63392 retry.go:31] will retry after 760.253643ms: waiting for machine to come up
	I1028 18:18:15.437822   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:15.438329   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:15.438354   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:15.438285   63392 retry.go:31] will retry after 1.399973235s: waiting for machine to come up
	I1028 18:18:16.839739   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:16.840363   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:16.840393   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:16.840303   63392 retry.go:31] will retry after 1.321487048s: waiting for machine to come up
	I1028 18:18:18.163138   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:18.163598   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:18.163632   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:18.163551   63392 retry.go:31] will retry after 1.503721229s: waiting for machine to come up
	I1028 18:18:19.669390   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:19.669873   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:19.669924   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:19.669818   63392 retry.go:31] will retry after 1.999765422s: waiting for machine to come up
	I1028 18:18:21.671144   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:21.671706   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:21.671735   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:21.671644   63392 retry.go:31] will retry after 3.43940399s: waiting for machine to come up
	I1028 18:18:25.112862   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:25.113441   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:25.113467   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:25.113393   63392 retry.go:31] will retry after 4.03360921s: waiting for machine to come up
	I1028 18:18:29.149871   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:29.150327   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:18:29.150353   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:18:29.150295   63392 retry.go:31] will retry after 4.431663968s: waiting for machine to come up
	I1028 18:18:33.586090   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.586553   63347 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:18:33.586602   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.586614   63347 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:18:33.586966   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868
	I1028 18:18:33.659377   63347 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:18:33.659408   63347 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:18:33.659417   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:18:33.661676   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.662144   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:33.662186   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.662370   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:18:33.662391   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:18:33.662413   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:18:33.662429   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:18:33.662459   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:18:33.788728   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
	I1028 18:18:33.789024   63347 main.go:141] libmachine: (old-k8s-version-223868) KVM machine creation complete!
	I1028 18:18:33.789360   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:18:33.789894   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:33.790098   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:33.790267   63347 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 18:18:33.790282   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:18:33.791429   63347 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 18:18:33.791443   63347 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 18:18:33.791451   63347 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 18:18:33.791458   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:33.793401   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.793742   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:33.793767   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.793913   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:33.794099   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:33.794255   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:33.794382   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:33.794543   63347 main.go:141] libmachine: Using SSH client type: native
	I1028 18:18:33.794785   63347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:18:33.794804   63347 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 18:18:33.899515   63347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:18:33.899536   63347 main.go:141] libmachine: Detecting the provisioner...
	I1028 18:18:33.899544   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:33.902110   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.902404   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:33.902443   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:33.902520   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:33.902776   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:33.902974   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:33.903132   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:33.903297   63347 main.go:141] libmachine: Using SSH client type: native
	I1028 18:18:33.903463   63347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:18:33.903473   63347 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 18:18:34.009145   63347 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 18:18:34.009256   63347 main.go:141] libmachine: found compatible host: buildroot
	I1028 18:18:34.009272   63347 main.go:141] libmachine: Provisioning with buildroot...
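
The "found compatible host: buildroot" line above comes from running cat /etc/os-release on the new VM and matching the ID field against known provisioners. A minimal stand-alone sketch of that parsing step in Go, assuming only the os-release text shown above (illustrative only, not minikube's actual detection code):

    // osrelease_sketch.go: parse os-release text and return the ID field.
    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    func detectProvisioner(osRelease string) string {
    	sc := bufio.NewScanner(strings.NewReader(osRelease))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
    	fmt.Println(detectProvisioner(sample)) // prints "buildroot"
    }
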
	I1028 18:18:34.009283   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:18:34.009545   63347 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:18:34.009569   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:18:34.009752   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:34.012509   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.012884   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.012912   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.013075   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:34.013240   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.013353   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.013442   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:34.013638   63347 main.go:141] libmachine: Using SSH client type: native
	I1028 18:18:34.013821   63347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:18:34.013833   63347 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:18:34.134999   63347 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:18:34.135030   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:34.137897   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.138342   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.138372   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.138505   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:34.138742   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.138933   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.139113   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:34.139277   63347 main.go:141] libmachine: Using SSH client type: native
	I1028 18:18:34.139461   63347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:18:34.139486   63347 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:18:34.258969   63347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
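
The SSH command above pins the machine name to 127.0.1.1 in /etc/hosts: if a 127.0.1.1 entry already exists it is rewritten, otherwise one is appended, and nothing is touched when the name is already mapped. The same logic as a small Go sketch (an assumption-level illustration, not the actual provisioner code):

    // etchosts_sketch.go: mirror the guarded /etc/hosts edit from the log above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func setLoopbackHostname(hosts, name string) string {
    	lines := strings.Split(hosts, "\n")
    	// Guard: hostname already mapped somewhere (the grep -xq check above).
    	for _, l := range lines {
    		f := strings.Fields(l)
    		if len(f) >= 2 && f[len(f)-1] == name {
    			return hosts
    		}
    	}
    	// Rewrite an existing 127.0.1.1 entry if present (the sed branch).
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name
    			return strings.Join(lines, "\n")
    		}
    	}
    	// Otherwise append a new entry (the tee -a branch).
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(setLoopbackHostname("127.0.0.1 localhost\n", "old-k8s-version-223868"))
    }
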
	I1028 18:18:34.258994   63347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:18:34.259042   63347 buildroot.go:174] setting up certificates
	I1028 18:18:34.259060   63347 provision.go:84] configureAuth start
	I1028 18:18:34.259077   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:18:34.259327   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:18:34.261947   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.262338   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.262379   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.262518   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:34.264694   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.264973   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.265001   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.265149   63347 provision.go:143] copyHostCerts
	I1028 18:18:34.265209   63347 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:18:34.265222   63347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:18:34.265277   63347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:18:34.265385   63347 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:18:34.265397   63347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:18:34.265428   63347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:18:34.265533   63347 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:18:34.265545   63347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:18:34.265571   63347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:18:34.265637   63347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
	I1028 18:18:34.479073   63347 provision.go:177] copyRemoteCerts
	I1028 18:18:34.479132   63347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:18:34.479154   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:34.481766   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.482134   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.482153   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.482314   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:34.482521   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.482708   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:34.482859   63347 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:18:34.570945   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:18:34.597325   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:18:34.623812   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:18:34.647206   63347 provision.go:87] duration metric: took 388.1323ms to configureAuth
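
configureAuth (provision.go:84 above) generates a server certificate whose subject alternative names match the san=[...] list logged at 18:18:34.265637, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A compact sketch of producing a certificate with those SANs using crypto/x509; for brevity it is self-signed here, whereas the real certificate is signed by the local CA (ca.pem/ca-key.pem):

    // servercert_sketch.go: illustrative only; minikube uses its own cert helpers.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-223868"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.194")},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-223868"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
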
	I1028 18:18:34.647237   63347 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:18:34.647437   63347 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:18:34.647506   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:34.650112   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.650485   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.650510   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.650687   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:34.650876   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.651040   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.651166   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:34.651338   63347 main.go:141] libmachine: Using SSH client type: native
	I1028 18:18:34.651526   63347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:18:34.651543   63347 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:18:34.877584   63347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:18:34.877610   63347 main.go:141] libmachine: Checking connection to Docker...
	I1028 18:18:34.877621   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetURL
	I1028 18:18:34.878821   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using libvirt version 6000000
	I1028 18:18:34.881029   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.881389   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.881419   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.881609   63347 main.go:141] libmachine: Docker is up and running!
	I1028 18:18:34.881629   63347 main.go:141] libmachine: Reticulating splines...
	I1028 18:18:34.881635   63347 client.go:171] duration metric: took 25.250178021s to LocalClient.Create
	I1028 18:18:34.881654   63347 start.go:167] duration metric: took 25.250238626s to libmachine.API.Create "old-k8s-version-223868"
	I1028 18:18:34.881664   63347 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:18:34.881673   63347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:18:34.881689   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:34.881931   63347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:18:34.881967   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:34.883869   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.884185   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:34.884217   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:34.884365   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:34.884560   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:34.884708   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:34.884853   63347 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:18:34.966498   63347 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:18:34.971046   63347 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:18:34.971066   63347 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:18:34.971127   63347 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:18:34.971204   63347 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:18:34.971296   63347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:18:34.980658   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:18:35.004025   63347 start.go:296] duration metric: took 122.347809ms for postStartSetup
	I1028 18:18:35.004074   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:18:35.004698   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:18:35.007415   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.007734   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:35.007759   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.007989   63347 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:18:35.008154   63347 start.go:128] duration metric: took 25.40194226s to createHost
	I1028 18:18:35.008174   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:35.010182   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.010513   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:35.010537   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.010654   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:35.010837   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:35.010996   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:35.011126   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:35.011272   63347 main.go:141] libmachine: Using SSH client type: native
	I1028 18:18:35.011431   63347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:18:35.011443   63347 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:18:35.121214   63347 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730139515.091360133
	
	I1028 18:18:35.121235   63347 fix.go:216] guest clock: 1730139515.091360133
	I1028 18:18:35.121242   63347 fix.go:229] Guest: 2024-10-28 18:18:35.091360133 +0000 UTC Remote: 2024-10-28 18:18:35.008163993 +0000 UTC m=+28.214973759 (delta=83.19614ms)
	I1028 18:18:35.121260   63347 fix.go:200] guest clock delta is within tolerance: 83.19614ms
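
The guest-clock check above runs date +%s.%N on the VM, parses the result, and compares it with the host clock; the ~83ms delta is accepted because it is under minikube's skew tolerance. A small sketch of that comparison, with the tolerance value picked arbitrarily for illustration (the real threshold lives inside minikube):

    // clockskew_sketch.go: parse a `date +%s.%N` sample and compute the delta.
    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(dateOutput, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return host.Sub(guest), nil
    }

    func main() {
    	d, err := guestClockDelta("1730139515.091360133", time.Now())
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = 2 * time.Second // assumed value, for illustration only
    	fmt.Printf("delta=%v, within tolerance: %v\n", d, d > -tolerance && d < tolerance)
    }
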
	I1028 18:18:35.121265   63347 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 25.515270613s
	I1028 18:18:35.121285   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:35.121537   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:18:35.124702   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.125138   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:35.125165   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.125333   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:35.125854   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:35.126069   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:18:35.126150   63347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:18:35.126198   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:35.126300   63347 ssh_runner.go:195] Run: cat /version.json
	I1028 18:18:35.126325   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:18:35.128883   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.128907   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.129279   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:35.129309   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.129339   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:35.129355   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:35.129479   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:35.129587   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:18:35.129666   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:35.129727   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:18:35.129871   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:35.129880   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:18:35.130051   63347 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:18:35.130112   63347 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:18:35.210046   63347 ssh_runner.go:195] Run: systemctl --version
	I1028 18:18:35.235027   63347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:18:35.391728   63347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:18:35.398228   63347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:18:35.398297   63347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:18:35.418399   63347 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:18:35.418422   63347 start.go:495] detecting cgroup driver to use...
	I1028 18:18:35.418490   63347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:18:35.437258   63347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:18:35.451040   63347 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:18:35.451102   63347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:18:35.465295   63347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:18:35.478223   63347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:18:35.595055   63347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:18:35.749855   63347 docker.go:233] disabling docker service ...
	I1028 18:18:35.749929   63347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:18:35.767849   63347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:18:35.786445   63347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:18:35.956792   63347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:18:36.110663   63347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:18:36.124990   63347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:18:36.150738   63347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:18:36.150808   63347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:18:36.162729   63347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:18:36.162791   63347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:18:36.173269   63347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:18:36.183405   63347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:18:36.193573   63347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
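
The two sed invocations above retarget CRI-O at the pause image that matches Kubernetes v1.20 (registry.k8s.io/pause:3.2) and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. The same rewrite expressed as a Go sketch (illustration only; the real flow shells out to sed as logged):

    // crioconf_sketch.go: rewrite pause_image and cgroup_manager lines in a
    // CRI-O drop-in, mirroring the sed edits from the log above.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var (
    	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    func patchCrioConf(conf string) string {
    	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
    	return cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    }

    func main() {
    	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(patchCrioConf(sample))
    }
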
	I1028 18:18:36.203939   63347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:18:36.213239   63347 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:18:36.213288   63347 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:18:36.226424   63347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:18:36.235626   63347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:18:36.352276   63347 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:18:36.459195   63347 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:18:36.459275   63347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:18:36.464043   63347 start.go:563] Will wait 60s for crictl version
	I1028 18:18:36.464100   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:36.467936   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:18:36.513372   63347 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:18:36.513442   63347 ssh_runner.go:195] Run: crio --version
	I1028 18:18:36.544981   63347 ssh_runner.go:195] Run: crio --version
	I1028 18:18:36.575038   63347 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 18:18:36.576317   63347 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:18:36.578642   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:36.578971   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:18:26 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:18:36.579001   63347 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:18:36.579178   63347 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:18:36.583097   63347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:18:36.595531   63347 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:18:36.595626   63347 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:18:36.595676   63347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:18:36.627727   63347 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:18:36.627786   63347 ssh_runner.go:195] Run: which lz4
	I1028 18:18:36.631658   63347 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:18:36.635748   63347 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:18:36.635776   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:18:38.184050   63347 crio.go:462] duration metric: took 1.552412605s to copy over tarball
	I1028 18:18:38.184133   63347 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:18:40.666013   63347 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.481846091s)
	I1028 18:18:40.666050   63347 crio.go:469] duration metric: took 2.481969981s to extract the tarball
	I1028 18:18:40.666059   63347 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:18:40.711137   63347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:18:40.755384   63347 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
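
Whether the preload took effect is decided by listing images with sudo crictl images --output json and looking for the expected kube-apiserver tag; since it is missing here, minikube falls back to loading images from its on-disk cache. A sketch of that check (the JSON field names "images" and "repoTags" are assumptions based on typical crictl output, not taken from this log):

    // preloadcheck_sketch.go: look for a required image tag in crictl's JSON.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(crictlJSON []byte, tag string) (bool, error) {
    	var out crictlImages
    	if err := json.Unmarshal(crictlJSON, &out); err != nil {
    		return false, err
    	}
    	for _, img := range out.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
    	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.20.0")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("preloaded:", ok) // false -> cached images must be loaded instead
    }
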
	I1028 18:18:40.755408   63347 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:18:40.755485   63347 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:18:40.755491   63347 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:18:40.755542   63347 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:18:40.755543   63347 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:18:40.755566   63347 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:18:40.755572   63347 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:18:40.755520   63347 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:18:40.755608   63347 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:18:40.757310   63347 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:18:40.757319   63347 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:18:40.757331   63347 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:18:40.757362   63347 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:18:40.757310   63347 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:18:40.757403   63347 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:18:40.757310   63347 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:18:40.757315   63347 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:18:40.935184   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:18:40.942800   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:18:40.943698   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:18:40.949355   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:18:40.960733   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:18:40.976390   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:18:41.020163   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:18:41.023029   63347 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:18:41.023074   63347 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:18:41.023108   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:41.026742   63347 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:18:41.026781   63347 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:18:41.026809   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:41.091681   63347 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:18:41.091723   63347 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:18:41.091780   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:41.099129   63347 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:18:41.099173   63347 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:18:41.099225   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:41.099244   63347 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:18:41.099289   63347 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:18:41.099379   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:41.133469   63347 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:18:41.133514   63347 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:18:41.133561   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:41.143230   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:18:41.143247   63347 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:18:41.143277   63347 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:18:41.143287   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:18:41.143306   63347 ssh_runner.go:195] Run: which crictl
	I1028 18:18:41.143355   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:18:41.143402   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:18:41.143419   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:18:41.143434   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:18:41.282712   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:18:41.282793   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:18:41.282873   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:18:41.282895   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:18:41.282991   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:18:41.283025   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:18:41.392195   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:18:41.392228   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:18:41.443239   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:18:41.443321   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:18:41.443375   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:18:41.443412   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:18:41.443323   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:18:41.536895   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:18:41.537882   63347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:18:41.626750   63347 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:18:41.626836   63347 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:18:41.626862   63347 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:18:41.626945   63347 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:18:41.626977   63347 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:18:41.654663   63347 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:18:41.654803   63347 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:18:43.076969   63347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:18:43.222463   63347 cache_images.go:92] duration metric: took 2.467038612s to LoadCachedImages
	W1028 18:18:43.222537   63347 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1028 18:18:43.222553   63347 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:18:43.222660   63347 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:18:43.222730   63347 ssh_runner.go:195] Run: crio config
	I1028 18:18:43.282339   63347 cni.go:84] Creating CNI manager for ""
	I1028 18:18:43.282359   63347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:18:43.282368   63347 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:18:43.282386   63347 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:18:43.282500   63347 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:18:43.282556   63347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:18:43.294491   63347 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:18:43.294570   63347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:18:43.305838   63347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:18:43.324591   63347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:18:43.343100   63347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
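
The rendered kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new as one multi-document YAML file. A quick way to sanity-check which API documents it carries is to list the apiVersion/kind pairs; the sketch below is a stand-alone helper for that, not part of minikube:

    // kubeadmkinds_sketch.go: print apiVersion/kind for each YAML document.
    // Usage (hypothetical): go run kubeadmkinds_sketch.go < kubeadm.yaml.new
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	var apiVersion string
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "apiVersion:"):
    			apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
    		case strings.HasPrefix(line, "kind:"):
    			fmt.Println(apiVersion, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
    		}
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }

For the config above this prints the v1beta2 InitConfiguration and ClusterConfiguration documents plus the KubeletConfiguration and KubeProxyConfiguration documents.
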
	I1028 18:18:43.363437   63347 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:18:43.367409   63347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:18:43.379803   63347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:18:43.495321   63347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:18:43.511608   63347 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:18:43.511635   63347 certs.go:194] generating shared ca certs ...
	I1028 18:18:43.511665   63347 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:43.511820   63347 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:18:43.511880   63347 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:18:43.511892   63347 certs.go:256] generating profile certs ...
	I1028 18:18:43.511961   63347 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:18:43.511976   63347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt with IP's: []
	I1028 18:18:43.651777   63347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt ...
	I1028 18:18:43.651803   63347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: {Name:mk71eb6754ad49f73881db1e9320e83292f5b764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:43.651978   63347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key ...
	I1028 18:18:43.651993   63347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key: {Name:mkb4f1422f9f3a258d452ebdacd4eff53754419e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:43.652099   63347 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:18:43.652116   63347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt.c3f44195 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.194]
	I1028 18:18:43.757068   63347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt.c3f44195 ...
	I1028 18:18:43.757099   63347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt.c3f44195: {Name:mkbe86198bfb396504def662cb6f673f01b99de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:43.757244   63347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195 ...
	I1028 18:18:43.757256   63347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195: {Name:mkeeea07b9c7c2a5db51481055becf6438df2621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:43.757333   63347 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt.c3f44195 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt
	I1028 18:18:43.757419   63347 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key
	I1028 18:18:43.757500   63347 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:18:43.757534   63347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt with IP's: []
	I1028 18:18:43.818429   63347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt ...
	I1028 18:18:43.818459   63347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt: {Name:mk9f8a5a15d3cc61f26ea24c06de8d9eca27e364 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:43.818614   63347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key ...
	I1028 18:18:43.818626   63347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key: {Name:mk2ec52ea833ca6ff02368ab88c1c2d99f30d7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:18:43.818830   63347 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:18:43.818899   63347 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:18:43.818914   63347 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:18:43.818946   63347 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:18:43.818980   63347 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:18:43.819014   63347 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:18:43.819078   63347 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:18:43.819919   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:18:43.847965   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:18:43.872077   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:18:43.895462   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:18:43.919016   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:18:43.946196   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:18:43.972565   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:18:43.998743   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:18:44.022687   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:18:44.046126   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:18:44.070752   63347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:18:44.095590   63347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:18:44.111984   63347 ssh_runner.go:195] Run: openssl version
	I1028 18:18:44.117667   63347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:18:44.128222   63347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:18:44.132659   63347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:18:44.132744   63347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:18:44.138490   63347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:18:44.148849   63347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:18:44.159175   63347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:18:44.163798   63347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:18:44.163869   63347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:18:44.169615   63347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:18:44.179931   63347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:18:44.191674   63347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:18:44.196450   63347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:18:44.196526   63347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:18:44.203030   63347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:18:44.213684   63347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:18:44.218025   63347 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 18:18:44.218083   63347 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:18:44.218148   63347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:18:44.218213   63347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:18:44.256113   63347 cri.go:89] found id: ""
	I1028 18:18:44.256191   63347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:18:44.266510   63347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:18:44.275556   63347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:18:44.284444   63347 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:18:44.284487   63347 kubeadm.go:157] found existing configuration files:
	
	I1028 18:18:44.284534   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:18:44.293319   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:18:44.293387   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:18:44.302256   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:18:44.311189   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:18:44.311241   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:18:44.326122   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:18:44.338797   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:18:44.338842   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:18:44.349419   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:18:44.362168   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:18:44.362226   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:18:44.379304   63347 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:18:44.528342   63347 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:18:44.528437   63347 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:18:44.692363   63347 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:18:44.692495   63347 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:18:44.692591   63347 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:18:44.897427   63347 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:18:45.045108   63347 out.go:235]   - Generating certificates and keys ...
	I1028 18:18:45.045260   63347 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:18:45.045318   63347 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:18:45.093940   63347 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 18:18:45.494496   63347 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 18:18:45.891824   63347 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 18:18:45.982792   63347 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 18:18:46.076983   63347 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 18:18:46.077199   63347 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-223868] and IPs [192.168.83.194 127.0.0.1 ::1]
	I1028 18:18:46.304897   63347 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 18:18:46.305246   63347 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-223868] and IPs [192.168.83.194 127.0.0.1 ::1]
	I1028 18:18:46.460633   63347 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 18:18:46.703880   63347 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 18:18:46.934966   63347 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 18:18:46.935329   63347 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:18:47.071923   63347 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:18:47.177304   63347 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:18:47.345389   63347 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:18:47.451839   63347 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:18:47.471193   63347 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:18:47.472460   63347 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:18:47.472556   63347 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:18:47.618536   63347 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:18:47.620512   63347 out.go:235]   - Booting up control plane ...
	I1028 18:18:47.620660   63347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:18:47.635081   63347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:18:47.636532   63347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:18:47.637719   63347 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:18:47.643617   63347 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:19:27.635663   63347 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:19:27.636060   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:19:27.636361   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:19:32.636372   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:19:32.636746   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:19:42.635477   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:19:42.635761   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:20:02.634807   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:20:02.635084   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:20:42.636386   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:20:42.636683   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:20:42.636707   63347 kubeadm.go:310] 
	I1028 18:20:42.636764   63347 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:20:42.636829   63347 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:20:42.636847   63347 kubeadm.go:310] 
	I1028 18:20:42.636889   63347 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:20:42.636944   63347 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:20:42.637066   63347 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:20:42.637082   63347 kubeadm.go:310] 
	I1028 18:20:42.637196   63347 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:20:42.637261   63347 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:20:42.637329   63347 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:20:42.637354   63347 kubeadm.go:310] 
	I1028 18:20:42.637516   63347 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:20:42.637629   63347 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:20:42.637640   63347 kubeadm.go:310] 
	I1028 18:20:42.637786   63347 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:20:42.637903   63347 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:20:42.638034   63347 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:20:42.638143   63347 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:20:42.638157   63347 kubeadm.go:310] 
	I1028 18:20:42.638423   63347 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:20:42.638543   63347 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:20:42.638698   63347 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1028 18:20:42.638749   63347 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-223868] and IPs [192.168.83.194 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-223868] and IPs [192.168.83.194 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-223868] and IPs [192.168.83.194 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-223868] and IPs [192.168.83.194 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:20:42.638793   63347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:20:43.113057   63347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:20:43.127402   63347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:20:43.137183   63347 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:20:43.137208   63347 kubeadm.go:157] found existing configuration files:
	
	I1028 18:20:43.137267   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:20:43.146412   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:20:43.146459   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:20:43.156119   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:20:43.165092   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:20:43.165147   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:20:43.174395   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:20:43.183107   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:20:43.183144   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:20:43.192190   63347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:20:43.201409   63347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:20:43.201452   63347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:20:43.211522   63347 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:20:43.289178   63347 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:20:43.289269   63347 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:20:43.433467   63347 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:20:43.433629   63347 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:20:43.433778   63347 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:20:43.610107   63347 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:20:43.611909   63347 out.go:235]   - Generating certificates and keys ...
	I1028 18:20:43.612000   63347 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:20:43.612104   63347 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:20:43.612223   63347 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:20:43.612307   63347 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:20:43.612403   63347 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:20:43.612491   63347 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:20:43.612581   63347 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:20:43.612853   63347 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:20:43.613137   63347 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:20:43.613488   63347 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:20:43.613552   63347 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:20:43.613648   63347 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:20:43.929346   63347 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:20:44.013545   63347 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:20:44.074701   63347 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:20:44.201810   63347 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:20:44.227322   63347 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:20:44.227465   63347 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:20:44.227522   63347 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:20:44.378127   63347 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:20:44.379939   63347 out.go:235]   - Booting up control plane ...
	I1028 18:20:44.380073   63347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:20:44.385254   63347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:20:44.386107   63347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:20:44.387033   63347 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:20:44.389147   63347 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:21:24.391988   63347 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:21:24.392115   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:21:24.392417   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:21:29.393134   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:21:29.393371   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:21:39.394063   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:21:39.394259   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:21:59.393483   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:21:59.393752   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:22:39.393421   63347 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:22:39.393582   63347 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:22:39.393614   63347 kubeadm.go:310] 
	I1028 18:22:39.393688   63347 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:22:39.393752   63347 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:22:39.393766   63347 kubeadm.go:310] 
	I1028 18:22:39.393818   63347 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:22:39.393870   63347 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:22:39.394032   63347 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:22:39.394042   63347 kubeadm.go:310] 
	I1028 18:22:39.394143   63347 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:22:39.394177   63347 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:22:39.394210   63347 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:22:39.394223   63347 kubeadm.go:310] 
	I1028 18:22:39.394366   63347 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:22:39.394438   63347 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:22:39.394452   63347 kubeadm.go:310] 
	I1028 18:22:39.394570   63347 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:22:39.394700   63347 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:22:39.394790   63347 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:22:39.394888   63347 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:22:39.394897   63347 kubeadm.go:310] 
	I1028 18:22:39.395418   63347 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:22:39.395527   63347 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:22:39.395601   63347 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:22:39.395672   63347 kubeadm.go:394] duration metric: took 3m55.177595185s to StartCluster
	I1028 18:22:39.395733   63347 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:22:39.395784   63347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:22:39.436785   63347 cri.go:89] found id: ""
	I1028 18:22:39.436809   63347 logs.go:282] 0 containers: []
	W1028 18:22:39.436816   63347 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:22:39.436824   63347 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:22:39.436875   63347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:22:39.476125   63347 cri.go:89] found id: ""
	I1028 18:22:39.476153   63347 logs.go:282] 0 containers: []
	W1028 18:22:39.476162   63347 logs.go:284] No container was found matching "etcd"
	I1028 18:22:39.476170   63347 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:22:39.476237   63347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:22:39.512948   63347 cri.go:89] found id: ""
	I1028 18:22:39.512972   63347 logs.go:282] 0 containers: []
	W1028 18:22:39.512979   63347 logs.go:284] No container was found matching "coredns"
	I1028 18:22:39.512986   63347 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:22:39.513030   63347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:22:39.545479   63347 cri.go:89] found id: ""
	I1028 18:22:39.545500   63347 logs.go:282] 0 containers: []
	W1028 18:22:39.545507   63347 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:22:39.545512   63347 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:22:39.545554   63347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:22:39.579039   63347 cri.go:89] found id: ""
	I1028 18:22:39.579068   63347 logs.go:282] 0 containers: []
	W1028 18:22:39.579078   63347 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:22:39.579085   63347 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:22:39.579147   63347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:22:39.611262   63347 cri.go:89] found id: ""
	I1028 18:22:39.611285   63347 logs.go:282] 0 containers: []
	W1028 18:22:39.611295   63347 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:22:39.611304   63347 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:22:39.611362   63347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:22:39.644391   63347 cri.go:89] found id: ""
	I1028 18:22:39.644420   63347 logs.go:282] 0 containers: []
	W1028 18:22:39.644428   63347 logs.go:284] No container was found matching "kindnet"
	I1028 18:22:39.644445   63347 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:22:39.644459   63347 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:22:39.755776   63347 logs.go:123] Gathering logs for container status ...
	I1028 18:22:39.755806   63347 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:22:39.793457   63347 logs.go:123] Gathering logs for kubelet ...
	I1028 18:22:39.793489   63347 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:22:39.852013   63347 logs.go:123] Gathering logs for dmesg ...
	I1028 18:22:39.852042   63347 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:22:39.866694   63347 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:22:39.866721   63347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:22:39.982451   63347 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1028 18:22:39.982485   63347 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:22:39.982525   63347 out.go:270] * 
	* 
	W1028 18:22:39.982580   63347 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:22:39.982597   63347 out.go:270] * 
	* 
	W1028 18:22:39.983437   63347 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:22:39.986344   63347 out.go:201] 
	W1028 18:22:39.987501   63347 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:22:39.987546   63347 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:22:39.987574   63347 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:22:39.988988   63347 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-223868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 6 (220.267736ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:22:40.252705   66077 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-223868" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (273.48s)
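The failure above ends with minikube's own hint (related issue #4172): check 'journalctl -xeu kubelet' and try the kubelet.cgroup-driver override. A minimal follow-up sketch, assuming the profile name and driver flags from the failing invocation are reused; whether the override actually fixes this v1.20.0 bring-up is not verified here:

	# Inspect why the kubelet never answered on :10248 (checks taken from the kubeadm hint above)
	minikube ssh -p old-k8s-version-223868 'sudo systemctl status kubelet'
	minikube ssh -p old-k8s-version-223868 'sudo journalctl -xeu kubelet | tail -n 50'
	# Retry the start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-223868 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd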

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-021370 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-021370 --alsologtostderr -v=3: exit status 82 (2m0.827117203s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-021370"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 18:20:51.249534   65148 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:20:51.249843   65148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:20:51.249857   65148 out.go:358] Setting ErrFile to fd 2...
	I1028 18:20:51.249864   65148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:20:51.250158   65148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:20:51.250497   65148 out.go:352] Setting JSON to false
	I1028 18:20:51.250592   65148 mustload.go:65] Loading cluster: embed-certs-021370
	I1028 18:20:51.251104   65148 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:20:51.251236   65148 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/config.json ...
	I1028 18:20:51.251451   65148 mustload.go:65] Loading cluster: embed-certs-021370
	I1028 18:20:51.251600   65148 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:20:51.251644   65148 stop.go:39] StopHost: embed-certs-021370
	I1028 18:20:51.252144   65148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:20:51.252205   65148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:20:51.267420   65148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I1028 18:20:51.267911   65148 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:20:51.268584   65148 main.go:141] libmachine: Using API Version  1
	I1028 18:20:51.268609   65148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:20:51.268912   65148 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:20:51.271048   65148 out.go:177] * Stopping node "embed-certs-021370"  ...
	I1028 18:20:51.272059   65148 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 18:20:51.272102   65148 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:20:51.272333   65148 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 18:20:51.272363   65148 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:20:51.275640   65148 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:20:51.276082   65148 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:19:55 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:20:51.276109   65148 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:20:51.276302   65148 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:20:51.276500   65148 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:20:51.276662   65148 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:20:51.276769   65148 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:20:51.402955   65148 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 18:20:51.463581   65148 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 18:20:51.532078   65148 main.go:141] libmachine: Stopping "embed-certs-021370"...
	I1028 18:20:51.532117   65148 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:20:51.533883   65148 main.go:141] libmachine: (embed-certs-021370) Calling .Stop
	I1028 18:20:51.537946   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 0/120
	I1028 18:20:52.539394   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 1/120
	I1028 18:20:53.541096   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 2/120
	I1028 18:20:54.542810   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 3/120
	I1028 18:20:55.544247   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 4/120
	I1028 18:20:56.545945   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 5/120
	I1028 18:20:57.547213   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 6/120
	I1028 18:20:58.548503   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 7/120
	I1028 18:20:59.550007   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 8/120
	I1028 18:21:00.551344   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 9/120
	I1028 18:21:01.553131   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 10/120
	I1028 18:21:02.554727   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 11/120
	I1028 18:21:03.556126   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 12/120
	I1028 18:21:04.557393   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 13/120
	I1028 18:21:05.558691   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 14/120
	I1028 18:21:06.560557   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 15/120
	I1028 18:21:07.561789   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 16/120
	I1028 18:21:08.563149   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 17/120
	I1028 18:21:09.564876   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 18/120
	I1028 18:21:10.566138   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 19/120
	I1028 18:21:11.568071   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 20/120
	I1028 18:21:12.569726   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 21/120
	I1028 18:21:13.571586   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 22/120
	I1028 18:21:14.572741   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 23/120
	I1028 18:21:15.574774   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 24/120
	I1028 18:21:16.576732   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 25/120
	I1028 18:21:17.578089   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 26/120
	I1028 18:21:18.579384   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 27/120
	I1028 18:21:19.580692   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 28/120
	I1028 18:21:20.867068   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 29/120
	I1028 18:21:21.868735   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 30/120
	I1028 18:21:22.870857   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 31/120
	I1028 18:21:23.872197   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 32/120
	I1028 18:21:24.873621   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 33/120
	I1028 18:21:25.875667   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 34/120
	I1028 18:21:26.877768   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 35/120
	I1028 18:21:27.879066   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 36/120
	I1028 18:21:28.880933   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 37/120
	I1028 18:21:29.883119   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 38/120
	I1028 18:21:30.884639   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 39/120
	I1028 18:21:31.886516   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 40/120
	I1028 18:21:32.887846   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 41/120
	I1028 18:21:33.889364   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 42/120
	I1028 18:21:34.890608   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 43/120
	I1028 18:21:35.892109   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 44/120
	I1028 18:21:36.894057   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 45/120
	I1028 18:21:37.895379   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 46/120
	I1028 18:21:38.896582   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 47/120
	I1028 18:21:39.898022   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 48/120
	I1028 18:21:40.899345   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 49/120
	I1028 18:21:41.901603   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 50/120
	I1028 18:21:42.903031   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 51/120
	I1028 18:21:43.904404   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 52/120
	I1028 18:21:44.905798   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 53/120
	I1028 18:21:45.907807   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 54/120
	I1028 18:21:46.909597   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 55/120
	I1028 18:21:47.911143   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 56/120
	I1028 18:21:48.912782   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 57/120
	I1028 18:21:49.915104   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 58/120
	I1028 18:21:50.917059   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 59/120
	I1028 18:21:51.919080   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 60/120
	I1028 18:21:52.921162   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 61/120
	I1028 18:21:53.922629   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 62/120
	I1028 18:21:54.924090   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 63/120
	I1028 18:21:55.925687   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 64/120
	I1028 18:21:56.927014   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 65/120
	I1028 18:21:57.928678   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 66/120
	I1028 18:21:58.930198   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 67/120
	I1028 18:21:59.932240   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 68/120
	I1028 18:22:00.933619   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 69/120
	I1028 18:22:01.935506   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 70/120
	I1028 18:22:02.937636   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 71/120
	I1028 18:22:03.938717   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 72/120
	I1028 18:22:04.939932   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 73/120
	I1028 18:22:05.941670   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 74/120
	I1028 18:22:06.943552   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 75/120
	I1028 18:22:07.945385   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 76/120
	I1028 18:22:08.947161   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 77/120
	I1028 18:22:09.949174   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 78/120
	I1028 18:22:10.950501   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 79/120
	I1028 18:22:11.952303   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 80/120
	I1028 18:22:12.953511   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 81/120
	I1028 18:22:13.955288   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 82/120
	I1028 18:22:14.956567   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 83/120
	I1028 18:22:15.957795   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 84/120
	I1028 18:22:16.959067   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 85/120
	I1028 18:22:17.960494   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 86/120
	I1028 18:22:18.962517   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 87/120
	I1028 18:22:19.964884   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 88/120
	I1028 18:22:20.966121   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 89/120
	I1028 18:22:21.967593   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 90/120
	I1028 18:22:22.968933   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 91/120
	I1028 18:22:23.970324   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 92/120
	I1028 18:22:24.971792   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 93/120
	I1028 18:22:25.973111   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 94/120
	I1028 18:22:26.974904   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 95/120
	I1028 18:22:27.976859   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 96/120
	I1028 18:22:28.979058   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 97/120
	I1028 18:22:29.980524   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 98/120
	I1028 18:22:30.982230   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 99/120
	I1028 18:22:31.984434   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 100/120
	I1028 18:22:32.985826   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 101/120
	I1028 18:22:33.987352   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 102/120
	I1028 18:22:34.988813   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 103/120
	I1028 18:22:35.990590   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 104/120
	I1028 18:22:36.992101   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 105/120
	I1028 18:22:37.993263   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 106/120
	I1028 18:22:38.994714   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 107/120
	I1028 18:22:39.996791   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 108/120
	I1028 18:22:40.998533   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 109/120
	I1028 18:22:42.000300   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 110/120
	I1028 18:22:43.001787   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 111/120
	I1028 18:22:44.003498   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 112/120
	I1028 18:22:45.004754   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 113/120
	I1028 18:22:46.006554   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 114/120
	I1028 18:22:47.008088   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 115/120
	I1028 18:22:48.009333   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 116/120
	I1028 18:22:49.011108   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 117/120
	I1028 18:22:50.012531   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 118/120
	I1028 18:22:51.014641   65148 main.go:141] libmachine: (embed-certs-021370) Waiting for machine to stop 119/120
	I1028 18:22:52.016176   65148 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 18:22:52.016232   65148 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 18:22:52.018101   65148 out.go:201] 
	W1028 18:22:52.019414   65148 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 18:22:52.019432   65148 out.go:270] * 
	* 
	W1028 18:22:52.022026   65148 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:22:52.023285   65148 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-021370 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370: exit status 3 (18.600473068s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:23:10.624828   66271 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.62:22: connect: no route to host
	E1028 18:23:10.624850   66271 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.62:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-021370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.43s)
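This stop failure is a GUEST_STOP_TIMEOUT: libmachine polled the KVM domain 120 times and the VM never left the "Running" state. A hedged triage sketch against libvirt directly; the domain name and the qemu:///system URI are taken from the log above, and force-stopping the domain is a debugging step only, not something the test itself performs:

	# Confirm the domain is still running from libvirt's point of view
	virsh -c qemu:///system list --all
	# Collect the logs the failure box asks for
	out/minikube-linux-amd64 -p embed-certs-021370 logs --file=logs.txt
	# Last resort for the stuck test VM: force the domain off (destructive; debugging only)
	virsh -c qemu:///system destroy embed-certs-021370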

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-051152 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-051152 --alsologtostderr -v=3: exit status 82 (2m0.601348298s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-051152"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 18:21:16.503534   65345 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:21:16.503657   65345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:21:16.503667   65345 out.go:358] Setting ErrFile to fd 2...
	I1028 18:21:16.503673   65345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:21:16.503864   65345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:21:16.504093   65345 out.go:352] Setting JSON to false
	I1028 18:21:16.504179   65345 mustload.go:65] Loading cluster: no-preload-051152
	I1028 18:21:16.504571   65345 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:21:16.504664   65345 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:21:16.504848   65345 mustload.go:65] Loading cluster: no-preload-051152
	I1028 18:21:16.504980   65345 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:21:16.505014   65345 stop.go:39] StopHost: no-preload-051152
	I1028 18:21:16.505401   65345 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:21:16.505457   65345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:21:16.520911   65345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I1028 18:21:16.521370   65345 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:21:16.521888   65345 main.go:141] libmachine: Using API Version  1
	I1028 18:21:16.521913   65345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:21:16.522223   65345 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:21:16.524427   65345 out.go:177] * Stopping node "no-preload-051152"  ...
	I1028 18:21:16.525629   65345 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 18:21:16.525659   65345 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:21:16.525854   65345 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 18:21:16.525887   65345 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:21:16.528604   65345 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:21:16.529003   65345 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:19:31 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:21:16.529037   65345 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:21:16.529166   65345 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:21:16.529330   65345 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:21:16.529498   65345 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:21:16.529631   65345 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:21:16.636215   65345 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 18:21:16.698178   65345 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 18:21:16.752890   65345 main.go:141] libmachine: Stopping "no-preload-051152"...
	I1028 18:21:16.752922   65345 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:21:16.754647   65345 main.go:141] libmachine: (no-preload-051152) Calling .Stop
	I1028 18:21:16.758528   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 0/120
	I1028 18:21:17.759894   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 1/120
	I1028 18:21:18.761177   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 2/120
	I1028 18:21:19.762981   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 3/120
	I1028 18:21:20.867076   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 4/120
	I1028 18:21:21.869044   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 5/120
	I1028 18:21:22.871032   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 6/120
	I1028 18:21:23.872490   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 7/120
	I1028 18:21:24.873858   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 8/120
	I1028 18:21:25.876051   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 9/120
	I1028 18:21:26.877937   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 10/120
	I1028 18:21:27.879294   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 11/120
	I1028 18:21:28.880933   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 12/120
	I1028 18:21:29.882997   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 13/120
	I1028 18:21:30.884560   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 14/120
	I1028 18:21:31.886541   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 15/120
	I1028 18:21:32.888033   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 16/120
	I1028 18:21:33.889598   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 17/120
	I1028 18:21:34.890849   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 18/120
	I1028 18:21:35.892253   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 19/120
	I1028 18:21:36.894388   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 20/120
	I1028 18:21:37.896535   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 21/120
	I1028 18:21:38.897503   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 22/120
	I1028 18:21:39.898613   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 23/120
	I1028 18:21:40.900168   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 24/120
	I1028 18:21:41.901823   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 25/120
	I1028 18:21:42.903993   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 26/120
	I1028 18:21:43.905267   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 27/120
	I1028 18:21:44.906621   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 28/120
	I1028 18:21:45.908012   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 29/120
	I1028 18:21:46.909893   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 30/120
	I1028 18:21:47.911390   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 31/120
	I1028 18:21:48.913098   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 32/120
	I1028 18:21:49.915401   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 33/120
	I1028 18:21:50.916945   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 34/120
	I1028 18:21:51.918769   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 35/120
	I1028 18:21:52.920423   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 36/120
	I1028 18:21:53.922041   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 37/120
	I1028 18:21:54.923269   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 38/120
	I1028 18:21:55.924903   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 39/120
	I1028 18:21:56.926911   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 40/120
	I1028 18:21:57.928526   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 41/120
	I1028 18:21:58.930067   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 42/120
	I1028 18:21:59.931439   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 43/120
	I1028 18:22:00.933160   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 44/120
	I1028 18:22:01.935335   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 45/120
	I1028 18:22:02.936683   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 46/120
	I1028 18:22:03.938117   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 47/120
	I1028 18:22:04.939600   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 48/120
	I1028 18:22:05.940863   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 49/120
	I1028 18:22:06.943066   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 50/120
	I1028 18:22:07.944774   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 51/120
	I1028 18:22:08.947054   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 52/120
	I1028 18:22:09.948271   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 53/120
	I1028 18:22:10.949417   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 54/120
	I1028 18:22:11.951519   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 55/120
	I1028 18:22:12.953020   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 56/120
	I1028 18:22:13.954431   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 57/120
	I1028 18:22:14.955689   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 58/120
	I1028 18:22:15.957216   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 59/120
	I1028 18:22:16.959317   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 60/120
	I1028 18:22:17.960931   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 61/120
	I1028 18:22:18.962381   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 62/120
	I1028 18:22:19.964675   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 63/120
	I1028 18:22:20.965921   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 64/120
	I1028 18:22:21.967805   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 65/120
	I1028 18:22:22.969294   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 66/120
	I1028 18:22:23.970530   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 67/120
	I1028 18:22:24.972042   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 68/120
	I1028 18:22:25.973946   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 69/120
	I1028 18:22:26.975625   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 70/120
	I1028 18:22:27.976851   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 71/120
	I1028 18:22:28.979180   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 72/120
	I1028 18:22:29.981197   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 73/120
	I1028 18:22:30.982539   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 74/120
	I1028 18:22:31.984336   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 75/120
	I1028 18:22:32.985695   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 76/120
	I1028 18:22:33.987085   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 77/120
	I1028 18:22:34.988511   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 78/120
	I1028 18:22:35.989813   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 79/120
	I1028 18:22:36.991394   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 80/120
	I1028 18:22:37.992925   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 81/120
	I1028 18:22:38.994427   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 82/120
	I1028 18:22:39.996593   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 83/120
	I1028 18:22:40.997715   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 84/120
	I1028 18:22:41.999385   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 85/120
	I1028 18:22:43.001155   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 86/120
	I1028 18:22:44.002527   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 87/120
	I1028 18:22:45.003998   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 88/120
	I1028 18:22:46.005386   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 89/120
	I1028 18:22:47.007616   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 90/120
	I1028 18:22:48.009106   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 91/120
	I1028 18:22:49.010955   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 92/120
	I1028 18:22:50.012243   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 93/120
	I1028 18:22:51.014375   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 94/120
	I1028 18:22:52.016605   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 95/120
	I1028 18:22:53.017915   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 96/120
	I1028 18:22:54.019188   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 97/120
	I1028 18:22:55.020412   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 98/120
	I1028 18:22:56.021667   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 99/120
	I1028 18:22:57.023562   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 100/120
	I1028 18:22:58.024781   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 101/120
	I1028 18:22:59.026848   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 102/120
	I1028 18:23:00.028130   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 103/120
	I1028 18:23:01.029511   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 104/120
	I1028 18:23:02.031524   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 105/120
	I1028 18:23:03.032743   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 106/120
	I1028 18:23:04.034020   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 107/120
	I1028 18:23:05.035512   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 108/120
	I1028 18:23:06.036738   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 109/120
	I1028 18:23:07.038567   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 110/120
	I1028 18:23:08.039823   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 111/120
	I1028 18:23:09.041154   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 112/120
	I1028 18:23:10.042405   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 113/120
	I1028 18:23:11.043703   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 114/120
	I1028 18:23:12.045567   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 115/120
	I1028 18:23:13.046684   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 116/120
	I1028 18:23:14.048050   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 117/120
	I1028 18:23:15.049426   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 118/120
	I1028 18:23:16.050789   65345 main.go:141] libmachine: (no-preload-051152) Waiting for machine to stop 119/120
	I1028 18:23:17.052276   65345 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 18:23:17.052338   65345 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 18:23:17.054250   65345 out.go:201] 
	W1028 18:23:17.055511   65345 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 18:23:17.055528   65345 out.go:270] * 
	* 
	W1028 18:23:17.058104   65345 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:23:17.059289   65345 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-051152 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152: exit status 3 (18.651993482s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:23:35.712795   66497 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.78:22: connect: no route to host
	E1028 18:23:35.712814   66497 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.78:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-051152" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-223868 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-223868 create -f testdata/busybox.yaml: exit status 1 (44.880326ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-223868" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-223868 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 6 (216.056857ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:22:40.515651   66118 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-223868" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 6 (216.486164ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:22:40.732268   66148 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-223868" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-223868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-223868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m51.540896511s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-223868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-223868 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-223868 describe deploy/metrics-server -n kube-system: exit status 1 (45.496967ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-223868" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-223868 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 6 (217.867222ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:24:32.535854   67017 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-223868" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-692033 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-692033 --alsologtostderr -v=3: exit status 82 (2m0.496397193s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-692033"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 18:23:03.806104   66400 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:23:03.806254   66400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:23:03.806266   66400 out.go:358] Setting ErrFile to fd 2...
	I1028 18:23:03.806277   66400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:23:03.806516   66400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:23:03.806809   66400 out.go:352] Setting JSON to false
	I1028 18:23:03.806917   66400 mustload.go:65] Loading cluster: default-k8s-diff-port-692033
	I1028 18:23:03.807356   66400 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:23:03.807453   66400 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:23:03.808016   66400 mustload.go:65] Loading cluster: default-k8s-diff-port-692033
	I1028 18:23:03.808348   66400 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:23:03.808423   66400 stop.go:39] StopHost: default-k8s-diff-port-692033
	I1028 18:23:03.809413   66400 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:23:03.809470   66400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:23:03.824224   66400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I1028 18:23:03.824708   66400 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:23:03.825242   66400 main.go:141] libmachine: Using API Version  1
	I1028 18:23:03.825265   66400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:23:03.825592   66400 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:23:03.830905   66400 out.go:177] * Stopping node "default-k8s-diff-port-692033"  ...
	I1028 18:23:03.832056   66400 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 18:23:03.832102   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:23:03.832316   66400 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 18:23:03.832338   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:23:03.835200   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:23:03.835624   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:21:36 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:23:03.835653   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:23:03.835763   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:23:03.835942   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:23:03.836085   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:23:03.836191   66400 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:23:03.943748   66400 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 18:23:04.003599   66400 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 18:23:04.074326   66400 main.go:141] libmachine: Stopping "default-k8s-diff-port-692033"...
	I1028 18:23:04.074367   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:23:04.075653   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Stop
	I1028 18:23:04.078710   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 0/120
	I1028 18:23:05.079784   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 1/120
	I1028 18:23:06.081013   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 2/120
	I1028 18:23:07.082254   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 3/120
	I1028 18:23:08.083610   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 4/120
	I1028 18:23:09.085595   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 5/120
	I1028 18:23:10.086695   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 6/120
	I1028 18:23:11.087877   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 7/120
	I1028 18:23:12.089201   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 8/120
	I1028 18:23:13.090472   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 9/120
	I1028 18:23:14.092589   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 10/120
	I1028 18:23:15.093833   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 11/120
	I1028 18:23:16.095098   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 12/120
	I1028 18:23:17.096452   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 13/120
	I1028 18:23:18.097862   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 14/120
	I1028 18:23:19.099794   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 15/120
	I1028 18:23:20.101067   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 16/120
	I1028 18:23:21.102969   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 17/120
	I1028 18:23:22.104430   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 18/120
	I1028 18:23:23.105571   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 19/120
	I1028 18:23:24.107874   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 20/120
	I1028 18:23:25.109150   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 21/120
	I1028 18:23:26.110484   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 22/120
	I1028 18:23:27.111819   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 23/120
	I1028 18:23:28.113088   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 24/120
	I1028 18:23:29.114773   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 25/120
	I1028 18:23:30.116109   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 26/120
	I1028 18:23:31.117296   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 27/120
	I1028 18:23:32.118682   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 28/120
	I1028 18:23:33.119874   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 29/120
	I1028 18:23:34.122116   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 30/120
	I1028 18:23:35.123343   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 31/120
	I1028 18:23:36.124558   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 32/120
	I1028 18:23:37.125855   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 33/120
	I1028 18:23:38.127094   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 34/120
	I1028 18:23:39.128866   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 35/120
	I1028 18:23:40.130107   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 36/120
	I1028 18:23:41.131327   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 37/120
	I1028 18:23:42.132530   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 38/120
	I1028 18:23:43.133828   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 39/120
	I1028 18:23:44.135928   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 40/120
	I1028 18:23:45.137039   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 41/120
	I1028 18:23:46.138270   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 42/120
	I1028 18:23:47.139538   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 43/120
	I1028 18:23:48.140746   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 44/120
	I1028 18:23:49.142662   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 45/120
	I1028 18:23:50.143894   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 46/120
	I1028 18:23:51.145248   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 47/120
	I1028 18:23:52.146559   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 48/120
	I1028 18:23:53.147820   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 49/120
	I1028 18:23:54.150036   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 50/120
	I1028 18:23:55.151347   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 51/120
	I1028 18:23:56.152774   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 52/120
	I1028 18:23:57.154134   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 53/120
	I1028 18:23:58.156090   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 54/120
	I1028 18:23:59.157994   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 55/120
	I1028 18:24:00.159350   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 56/120
	I1028 18:24:01.160612   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 57/120
	I1028 18:24:02.161991   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 58/120
	I1028 18:24:03.163156   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 59/120
	I1028 18:24:04.165224   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 60/120
	I1028 18:24:05.166613   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 61/120
	I1028 18:24:06.167832   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 62/120
	I1028 18:24:07.169248   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 63/120
	I1028 18:24:08.170504   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 64/120
	I1028 18:24:09.172286   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 65/120
	I1028 18:24:10.173576   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 66/120
	I1028 18:24:11.174835   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 67/120
	I1028 18:24:12.176105   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 68/120
	I1028 18:24:13.177285   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 69/120
	I1028 18:24:14.179322   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 70/120
	I1028 18:24:15.180683   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 71/120
	I1028 18:24:16.181920   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 72/120
	I1028 18:24:17.183297   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 73/120
	I1028 18:24:18.185324   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 74/120
	I1028 18:24:19.187571   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 75/120
	I1028 18:24:20.189400   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 76/120
	I1028 18:24:21.191013   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 77/120
	I1028 18:24:22.192436   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 78/120
	I1028 18:24:23.193793   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 79/120
	I1028 18:24:24.196031   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 80/120
	I1028 18:24:25.197456   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 81/120
	I1028 18:24:26.198839   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 82/120
	I1028 18:24:27.200222   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 83/120
	I1028 18:24:28.201456   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 84/120
	I1028 18:24:29.203586   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 85/120
	I1028 18:24:30.204811   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 86/120
	I1028 18:24:31.206251   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 87/120
	I1028 18:24:32.207320   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 88/120
	I1028 18:24:33.208552   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 89/120
	I1028 18:24:34.210890   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 90/120
	I1028 18:24:35.212411   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 91/120
	I1028 18:24:36.213639   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 92/120
	I1028 18:24:37.215114   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 93/120
	I1028 18:24:38.216309   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 94/120
	I1028 18:24:39.218383   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 95/120
	I1028 18:24:40.219786   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 96/120
	I1028 18:24:41.221119   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 97/120
	I1028 18:24:42.222395   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 98/120
	I1028 18:24:43.223660   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 99/120
	I1028 18:24:44.225675   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 100/120
	I1028 18:24:45.226948   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 101/120
	I1028 18:24:46.228191   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 102/120
	I1028 18:24:47.229507   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 103/120
	I1028 18:24:48.230686   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 104/120
	I1028 18:24:49.232486   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 105/120
	I1028 18:24:50.234027   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 106/120
	I1028 18:24:51.235315   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 107/120
	I1028 18:24:52.236629   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 108/120
	I1028 18:24:53.237938   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 109/120
	I1028 18:24:54.240031   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 110/120
	I1028 18:24:55.241425   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 111/120
	I1028 18:24:56.242770   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 112/120
	I1028 18:24:57.244081   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 113/120
	I1028 18:24:58.245348   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 114/120
	I1028 18:24:59.247295   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 115/120
	I1028 18:25:00.248620   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 116/120
	I1028 18:25:01.249918   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 117/120
	I1028 18:25:02.251182   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 118/120
	I1028 18:25:03.252416   66400 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for machine to stop 119/120
	I1028 18:25:04.252860   66400 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 18:25:04.252913   66400 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 18:25:04.254556   66400 out.go:201] 
	W1028 18:25:04.255930   66400 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 18:25:04.255948   66400 out.go:270] * 
	* 
	W1028 18:25:04.258627   66400 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:25:04.259878   66400 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-692033 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033: exit status 3 (18.459462303s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:25:22.720815   67283 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E1028 18:25:22.720844   67283 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-692033" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370: exit status 3 (3.167782839s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:23:13.792854   66434 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.62:22: connect: no route to host
	E1028 18:23:13.792880   66434 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.62:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-021370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-021370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152630417s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.62:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-021370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370: exit status 3 (3.063069041s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:23:23.008811   66554 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.62:22: connect: no route to host
	E1028 18:23:23.008836   66554 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.62:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-021370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152
E1028 18:23:38.395200   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152: exit status 3 (3.167729429s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:23:38.880745   66674 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.78:22: connect: no route to host
	E1028 18:23:38.880764   66674 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.78:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-051152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-051152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153509032s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.78:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-051152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152: exit status 3 (3.062429399s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:23:48.096755   66755 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.78:22: connect: no route to host
	E1028 18:23:48.096776   66755 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.78:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-051152" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
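Note: every post-stop step above fails at the same point. The status probe, the addon enable, and the log retrieval all try to open an SSH session to the guest and get "dial tcp 192.168.61.78:22: connect: no route to host", so the host reports "Error" instead of the expected "Stopped". The following is a minimal, self-contained Go sketch of that kind of reachability probe; it is illustrative only and not taken from the minikube or test source, and the address (copied from the log) and timeout are example values.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH dials the guest's SSH port, roughly the same step at which the
	// post-stop checks above fail, and reports whether the host is reachable.
	func probeSSH(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			// On an unreachable stopped VM this wraps errors such as
			// "connect: no route to host", matching the log output above.
			return fmt.Errorf("ssh port unreachable: %w", err)
		}
		return conn.Close()
	}

	func main() {
		// Example address taken from the failure log; adjust as needed.
		if err := probeSSH("192.168.61.78:22", 5*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port reachable")
	}
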

TestStartStop/group/old-k8s-version/serial/SecondStart (734.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-223868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-223868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m11.262315009s)

-- stdout --
	* [old-k8s-version-223868] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-223868" primary control-plane node in "old-k8s-version-223868" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1028 18:24:36.046160   67149 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:24:36.046245   67149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:24:36.046253   67149 out.go:358] Setting ErrFile to fd 2...
	I1028 18:24:36.046256   67149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:24:36.046398   67149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:24:36.046868   67149 out.go:352] Setting JSON to false
	I1028 18:24:36.047700   67149 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7619,"bootTime":1730132257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:24:36.047792   67149 start.go:139] virtualization: kvm guest
	I1028 18:24:36.049578   67149 out.go:177] * [old-k8s-version-223868] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:24:36.050654   67149 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:24:36.050667   67149 notify.go:220] Checking for updates...
	I1028 18:24:36.052848   67149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:24:36.053930   67149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:24:36.055111   67149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:24:36.056191   67149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:24:36.057238   67149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:24:36.058562   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:24:36.058933   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:24:36.058994   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:24:36.073890   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I1028 18:24:36.074249   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:24:36.074715   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:24:36.074737   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:24:36.075033   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:24:36.075203   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:24:36.076721   67149 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 18:24:36.077826   67149 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:24:36.078117   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:24:36.078163   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:24:36.092043   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I1028 18:24:36.092376   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:24:36.092819   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:24:36.092847   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:24:36.093146   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:24:36.093321   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:24:36.126426   67149 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:24:36.127544   67149 start.go:297] selected driver: kvm2
	I1028 18:24:36.127557   67149 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:24:36.127660   67149 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:24:36.128291   67149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:24:36.128356   67149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:24:36.142254   67149 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:24:36.142620   67149 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:24:36.142651   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:24:36.142691   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:24:36.142722   67149 start.go:340] cluster config:
	{Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:24:36.142823   67149 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:24:36.144157   67149 out.go:177] * Starting "old-k8s-version-223868" primary control-plane node in "old-k8s-version-223868" cluster
	I1028 18:24:36.145168   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:24:36.145202   67149 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 18:24:36.145211   67149 cache.go:56] Caching tarball of preloaded images
	I1028 18:24:36.145274   67149 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:24:36.145284   67149 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 18:24:36.145362   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:24:36.145519   67149 start.go:360] acquireMachinesLock for old-k8s-version-223868: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:28:19.893083   67149 start.go:364] duration metric: took 3m43.747535803s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:28:19.893161   67149 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:19.893170   67149 fix.go:54] fixHost starting: 
	I1028 18:28:19.893556   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:19.893608   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:19.909857   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I1028 18:28:19.910215   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:19.910669   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:28:19.910690   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:19.911049   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:19.911241   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:19.911395   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:28:19.912825   67149 fix.go:112] recreateIfNeeded on old-k8s-version-223868: state=Stopped err=<nil>
	I1028 18:28:19.912856   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	W1028 18:28:19.912996   67149 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:19.915041   67149 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	I1028 18:28:19.916422   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .Start
	I1028 18:28:19.916611   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:28:19.917295   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:28:19.917560   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:28:19.917951   67149 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:28:19.918628   67149 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:28:21.207941   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:28:21.209066   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.209518   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.209604   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.209495   68155 retry.go:31] will retry after 258.02952ms: waiting for machine to come up
	I1028 18:28:21.468599   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.469034   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.469052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.468996   68155 retry.go:31] will retry after 389.053264ms: waiting for machine to come up
	I1028 18:28:21.859493   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.859987   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.860017   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.859929   68155 retry.go:31] will retry after 454.438888ms: waiting for machine to come up
	I1028 18:28:22.315484   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.315961   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.315988   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.315904   68155 retry.go:31] will retry after 531.549561ms: waiting for machine to come up
	I1028 18:28:22.849247   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.849736   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.849791   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.849693   68155 retry.go:31] will retry after 602.202649ms: waiting for machine to come up
	I1028 18:28:23.453311   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:23.453859   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:23.453887   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:23.453796   68155 retry.go:31] will retry after 836.622626ms: waiting for machine to come up
	I1028 18:28:24.291959   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:24.292286   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:24.292315   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:24.292252   68155 retry.go:31] will retry after 1.187276744s: waiting for machine to come up
	I1028 18:28:25.480962   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:25.481398   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:25.481417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:25.481350   68155 retry.go:31] will retry after 1.417127806s: waiting for machine to come up
	I1028 18:28:26.900944   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:26.901481   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:26.901511   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:26.901426   68155 retry.go:31] will retry after 1.766762252s: waiting for machine to come up
	I1028 18:28:28.670334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:28.670798   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:28.670827   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:28.670742   68155 retry.go:31] will retry after 2.287152926s: waiting for machine to come up
	I1028 18:28:30.959639   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:30.959947   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:30.959963   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:30.959917   68155 retry.go:31] will retry after 1.799223833s: waiting for machine to come up
	I1028 18:28:32.761498   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:32.761941   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:32.761968   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:32.761894   68155 retry.go:31] will retry after 2.231065891s: waiting for machine to come up
	I1028 18:28:34.994438   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:34.994902   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:34.994936   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:34.994847   68155 retry.go:31] will retry after 4.079794439s: waiting for machine to come up
	I1028 18:28:39.079052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079556   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079584   67149 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:28:39.079593   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:28:39.079888   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.079919   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | skip adding static IP to network mk-old-k8s-version-223868 - found existing host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"}
	I1028 18:28:39.079935   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:28:39.079955   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:28:39.079971   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:28:39.082041   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.082354   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082480   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:28:39.082500   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:28:39.082528   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:39.082555   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:28:39.082567   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:28:39.204523   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:39.204883   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:28:39.205526   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.208073   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208434   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.208478   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208709   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:28:39.208907   67149 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:39.208926   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:39.209144   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.211109   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211407   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.211437   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.211739   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.211888   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.212033   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.212218   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.212388   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.212398   67149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:39.316528   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:39.316566   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.316813   67149 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:28:39.316841   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.317028   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.319389   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319687   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.319713   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319836   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.320017   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320167   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320310   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.320458   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.320642   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.320656   67149 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:28:39.439149   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:28:39.439179   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.441957   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442268   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.442300   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442528   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.442736   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.442940   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.443122   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.443304   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.443525   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.443550   67149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:39.561619   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:39.561651   67149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:39.561702   67149 buildroot.go:174] setting up certificates
	I1028 18:28:39.561716   67149 provision.go:84] configureAuth start
	I1028 18:28:39.561731   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.562015   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.564838   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565195   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.565229   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565373   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.567875   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568262   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.568287   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568452   67149 provision.go:143] copyHostCerts
	I1028 18:28:39.568534   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:39.568553   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:39.568621   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:39.568745   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:39.568768   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:39.568810   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:39.568899   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:39.568911   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:39.568937   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:39.569006   67149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
	I1028 18:28:39.786398   67149 provision.go:177] copyRemoteCerts
	I1028 18:28:39.786449   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:39.786482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.789048   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789373   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.789417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789535   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.789733   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.789884   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.790013   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:39.871816   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:39.902889   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:28:39.932633   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:39.958581   67149 provision.go:87] duration metric: took 396.851161ms to configureAuth
	I1028 18:28:39.958609   67149 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:39.958796   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:28:39.958881   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.961667   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962019   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.962044   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962240   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.962468   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962671   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962850   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.963037   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.963220   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.963239   67149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:40.179808   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:40.179843   67149 machine.go:96] duration metric: took 970.91659ms to provisionDockerMachine
	I1028 18:28:40.179857   67149 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:28:40.179869   67149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:40.179917   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.180287   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:40.180319   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.183011   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183383   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.183411   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183578   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.183770   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.183964   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.184114   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.270445   67149 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:40.275798   67149 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:40.275825   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:40.275898   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:40.275995   67149 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:40.276108   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:40.287529   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:40.310860   67149 start.go:296] duration metric: took 130.989944ms for postStartSetup
	I1028 18:28:40.310899   67149 fix.go:56] duration metric: took 20.417730265s for fixHost
	I1028 18:28:40.310925   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.313613   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.313967   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.314000   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.314175   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.314354   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314518   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314692   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.314862   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:40.315021   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:40.315032   67149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:40.421204   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140120.384024791
	
	I1028 18:28:40.421225   67149 fix.go:216] guest clock: 1730140120.384024791
	I1028 18:28:40.421235   67149 fix.go:229] Guest: 2024-10-28 18:28:40.384024791 +0000 UTC Remote: 2024-10-28 18:28:40.310903937 +0000 UTC m=+244.300202669 (delta=73.120854ms)
	I1028 18:28:40.421259   67149 fix.go:200] guest clock delta is within tolerance: 73.120854ms
	I1028 18:28:40.421265   67149 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 20.528130845s
	I1028 18:28:40.421297   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.421574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:40.424700   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425088   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.425116   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425286   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.425971   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426188   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426266   67149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:40.426340   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.426604   67149 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:40.426632   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.429408   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429569   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429807   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.429841   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429950   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430059   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.430092   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.430123   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430236   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430383   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430459   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430616   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.430614   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.509203   67149 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:40.540019   67149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:40.701732   67149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:40.710264   67149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:40.710354   67149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:40.731373   67149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:40.731398   67149 start.go:495] detecting cgroup driver to use...
	I1028 18:28:40.731465   67149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:40.751312   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:40.766288   67149 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:40.766399   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:40.783995   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:40.800295   67149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:40.940688   67149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:41.101493   67149 docker.go:233] disabling docker service ...
	I1028 18:28:41.101562   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:41.123350   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:41.141744   67149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:41.279020   67149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:41.414748   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:41.429469   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:41.448611   67149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:28:41.448669   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.460766   67149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:41.460842   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.473021   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.485888   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.497498   67149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:41.509250   67149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:41.519701   67149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:41.519754   67149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:41.534596   67149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:41.544814   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:41.681203   67149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:41.786879   67149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:41.786957   67149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:41.791981   67149 start.go:563] Will wait 60s for crictl version
	I1028 18:28:41.792041   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:41.796034   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:41.839867   67149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:41.839958   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.873029   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.904534   67149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 18:28:41.906005   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:41.909278   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909683   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:41.909741   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909996   67149 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:41.915405   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:41.931747   67149 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:41.931886   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:28:41.931944   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:41.987909   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:41.987966   67149 ssh_runner.go:195] Run: which lz4
	I1028 18:28:41.993527   67149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:28:41.998982   67149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:28:41.999014   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:28:43.643480   67149 crio.go:462] duration metric: took 1.649982959s to copy over tarball
	I1028 18:28:43.643559   67149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:28:46.758767   67149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115173801s)
	I1028 18:28:46.758810   67149 crio.go:469] duration metric: took 3.115300284s to extract the tarball
	I1028 18:28:46.758821   67149 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:28:46.816906   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:46.864347   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:46.864376   67149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:46.864499   67149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.864564   67149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.864623   67149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.864639   67149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.864674   67149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.864686   67149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:28:46.864710   67149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.864529   67149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:46.866383   67149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.866445   67149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.866493   67149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.866579   67149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.866795   67149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.867073   67149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.867095   67149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:28:46.867488   67149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.043358   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.053844   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.055684   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.056812   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.066211   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.090931   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.104900   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:28:47.141214   67149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:28:47.141260   67149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.141307   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202804   67149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:28:47.202863   67149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.202873   67149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:28:47.202903   67149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.202915   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202944   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.234811   67149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:28:47.234853   67149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.234900   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.236717   67149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:28:47.236751   67149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.236798   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.243872   67149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:28:47.243918   67149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.243971   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260210   67149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:28:47.260253   67149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:28:47.260256   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.260293   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260398   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.260438   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.260456   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.260517   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.260559   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413617   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.413776   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.413804   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413825   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.414063   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.414103   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.414150   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.544933   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.581577   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.582079   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.582161   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.582206   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.582344   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.582819   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.662237   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:28:47.736212   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.739757   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:28:47.739928   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:28:47.739802   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:28:47.739812   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:28:47.739841   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:28:47.783578   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:28:49.121698   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:49.266583   67149 cache_images.go:92] duration metric: took 2.402188013s to LoadCachedImages
	W1028 18:28:49.266686   67149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 18:28:49.266702   67149 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:28:49.266828   67149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
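The [Unit]/[Service] fragment above is what gets written a few lines below as the kubelet drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes) next to a minimal /lib/systemd/system/kubelet.service (352 bytes). A quick way to confirm what the kubelet will actually start with, assuming SSH access to the node:

	# show kubelet.service together with the 10-kubeadm.conf drop-in generated above
	systemctl cat kubelet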
	I1028 18:28:49.266918   67149 ssh_runner.go:195] Run: crio config
	I1028 18:28:49.318146   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:28:49.318167   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:49.318176   67149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:49.318193   67149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:28:49.318310   67149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:49.318371   67149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:28:49.329249   67149 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:49.329339   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:49.339379   67149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:28:49.359216   67149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:49.378289   67149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 18:28:49.397766   67149 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:49.401788   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:49.418204   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:49.558031   67149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:49.575443   67149 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:28:49.575469   67149 certs.go:194] generating shared ca certs ...
	I1028 18:28:49.575489   67149 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:49.575693   67149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:49.575746   67149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:49.575756   67149 certs.go:256] generating profile certs ...
	I1028 18:28:49.575859   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:28:49.575914   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:28:49.575951   67149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:28:49.576058   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:49.576092   67149 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:49.576103   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:49.576131   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:49.576162   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:49.576186   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:49.576238   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:49.576994   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:49.622814   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:49.653690   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:49.678975   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:49.707340   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:28:49.744836   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:28:49.776367   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:49.818999   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:28:49.847531   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:49.871924   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:49.897751   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:49.923267   67149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:49.939805   67149 ssh_runner.go:195] Run: openssl version
	I1028 18:28:49.945611   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:49.956191   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960862   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960916   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.966701   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:49.977882   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:49.990873   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995751   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995810   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:50.001891   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:50.013508   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:50.028132   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034144   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034217   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.041768   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
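The three ln -fs commands above expose each CA to the system trust store by linking it under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0). A sketch of performing the same step by hand for one certificate, reusing the paths from the log:

	# compute the subject hash and create the <hash>.0 link that OpenSSL expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"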
	I1028 18:28:50.054079   67149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:50.058983   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:50.064802   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:50.070790   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:50.077090   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:50.083149   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:50.089232   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
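The six -checkend 86400 runs above exit non-zero only when a certificate expires within the next 24 hours (86400 seconds), presumably so that certificates about to expire get regenerated before the restart. A standalone version of one check:

	# exits 0 and prints "Certificate will not expire" when the cert is valid for at least 24h
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400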
	I1028 18:28:50.095205   67149 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:50.095338   67149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:50.095411   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.139777   67149 cri.go:89] found id: ""
	I1028 18:28:50.139854   67149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:50.151967   67149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:50.151986   67149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:50.152040   67149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:50.163454   67149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:50.164876   67149 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:28:50.165798   67149 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-223868" cluster setting kubeconfig missing "old-k8s-version-223868" context setting]
	I1028 18:28:50.167121   67149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:50.169545   67149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:50.179447   67149 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.194
	I1028 18:28:50.179477   67149 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:50.179490   67149 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:50.179542   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.213891   67149 cri.go:89] found id: ""
	I1028 18:28:50.213963   67149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:50.231491   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:50.241752   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:50.241775   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:50.241829   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:50.252015   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:50.252075   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:50.263032   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:50.273500   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:50.273564   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:50.283603   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.293521   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:50.293567   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.303701   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:50.316202   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:50.316269   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
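The grep/rm pairs above are the stale kubeconfig cleanup: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it (here all four were already missing, so every grep exited with status 2). A hypothetical loop equivalent to the per-file checks in the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done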
	I1028 18:28:50.327841   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:50.341366   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:50.469586   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.507608   67149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037983659s)
	I1028 18:28:51.507645   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.733141   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.842228   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.947336   67149 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:51.947430   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.447618   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.947814   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.448476   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.947571   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.448371   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.947700   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.447735   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.948435   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.447531   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.947711   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.447782   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.947642   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.948256   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.447558   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.948018   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.448186   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.947565   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.447557   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.947946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.448522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.947533   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.447522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.948025   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.448136   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.948157   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.447635   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.947987   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.447581   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.947550   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.447977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.947491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.447960   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.947662   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.448201   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.947753   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.448116   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.948175   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.448521   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.947592   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.448427   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.948413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.448390   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.948518   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.447929   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.948106   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.948236   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.447535   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.948117   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.448197   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.948491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.948393   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.448406   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.947788   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.448100   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.947907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.447903   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.948305   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.448529   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.947708   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.447881   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.947572   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.448433   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.948299   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.447748   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.947863   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.448386   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.948082   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.447496   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.948285   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.947683   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.447813   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.947810   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.448413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.947477   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.448099   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.948269   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.447660   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.947559   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.447716   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.948569   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.447555   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.947612   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.448411   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.947786   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.447566   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.947886   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.448276   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.948547   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.447546   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.947974   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.448334   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.948183   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.448396   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.947620   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.448306   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.947486   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.448219   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.948295   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.447765   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.947468   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.448454   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.947488   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.447568   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.948070   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.448123   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.948178   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.447989   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.947888   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.448230   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.947692   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.448090   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.947996   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.447949   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.947977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:51.448130   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
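The repeated pgrep runs above are the wait for a kube-apiserver process to appear: the poll fires roughly every 500ms and, in this log, ran from 18:28:51 to 18:29:51 without ever finding the process before minikube fell back to gathering diagnostics below. A shell sketch of an equivalent wait, assuming the same roughly 60s budget:

	# poll up to ~60s (120 * 0.5s) for an apiserver process started by minikube
	for i in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.5
	done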
	I1028 18:29:51.948450   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:51.948545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:51.987428   67149 cri.go:89] found id: ""
	I1028 18:29:51.987459   67149 logs.go:282] 0 containers: []
	W1028 18:29:51.987470   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:51.987478   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:51.987534   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:52.021429   67149 cri.go:89] found id: ""
	I1028 18:29:52.021452   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.021460   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:52.021466   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:52.021509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:52.055338   67149 cri.go:89] found id: ""
	I1028 18:29:52.055362   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.055373   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:52.055380   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:52.055432   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:52.088673   67149 cri.go:89] found id: ""
	I1028 18:29:52.088697   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.088705   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:52.088711   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:52.088766   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:52.129833   67149 cri.go:89] found id: ""
	I1028 18:29:52.129854   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.129862   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:52.129867   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:52.129918   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:52.162994   67149 cri.go:89] found id: ""
	I1028 18:29:52.163029   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.163040   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:52.163047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:52.163105   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:52.196819   67149 cri.go:89] found id: ""
	I1028 18:29:52.196840   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.196848   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:52.196853   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:52.196906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:52.232924   67149 cri.go:89] found id: ""
	I1028 18:29:52.232955   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.232965   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:52.232977   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:52.232992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:52.283317   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:52.283353   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:52.296648   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:52.296673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:52.423396   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:52.423418   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:52.423429   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:52.497671   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:52.497704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:55.037920   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:55.052539   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:55.052602   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:55.089302   67149 cri.go:89] found id: ""
	I1028 18:29:55.089332   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.089343   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:55.089351   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:55.089404   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:55.127317   67149 cri.go:89] found id: ""
	I1028 18:29:55.127345   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.127352   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:55.127358   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:55.127413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:55.161689   67149 cri.go:89] found id: ""
	I1028 18:29:55.161714   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.161721   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:55.161727   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:55.161772   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:55.196494   67149 cri.go:89] found id: ""
	I1028 18:29:55.196521   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.196534   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:55.196542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:55.196596   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:55.234980   67149 cri.go:89] found id: ""
	I1028 18:29:55.235008   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.235020   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:55.235028   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:55.235086   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:55.274750   67149 cri.go:89] found id: ""
	I1028 18:29:55.274775   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.274783   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:55.274789   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:55.274842   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:55.309839   67149 cri.go:89] found id: ""
	I1028 18:29:55.309865   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.309874   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:55.309881   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:55.309943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:55.358765   67149 cri.go:89] found id: ""
	I1028 18:29:55.358793   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.358805   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:55.358816   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:55.358830   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:55.422821   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:55.422869   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:55.439458   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:55.439482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:55.507743   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:55.507764   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:55.507775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:55.582679   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:55.582710   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:58.124907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:58.139125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:58.139181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:58.178829   67149 cri.go:89] found id: ""
	I1028 18:29:58.178853   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.178864   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:58.178871   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:58.178933   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:58.212290   67149 cri.go:89] found id: ""
	I1028 18:29:58.212320   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.212336   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:58.212344   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:58.212402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:58.246108   67149 cri.go:89] found id: ""
	I1028 18:29:58.246135   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.246145   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:58.246152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:58.246212   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:58.280625   67149 cri.go:89] found id: ""
	I1028 18:29:58.280651   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.280662   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:58.280670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:58.280727   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:58.318755   67149 cri.go:89] found id: ""
	I1028 18:29:58.318783   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.318793   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:58.318801   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:58.318853   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:58.356452   67149 cri.go:89] found id: ""
	I1028 18:29:58.356487   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.356499   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:58.356506   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:58.356564   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:58.389906   67149 cri.go:89] found id: ""
	I1028 18:29:58.389928   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.389936   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:58.389943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:58.390001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:58.425883   67149 cri.go:89] found id: ""
	I1028 18:29:58.425911   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.425920   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:58.425929   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:58.425943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:58.484392   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:58.484433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:58.498133   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:58.498159   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:58.572358   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:58.572382   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:58.572397   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:58.654963   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:58.654997   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
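	For illustration only, the probe that this log repeats every few seconds condenses to the commands recorded above: a pgrep for a running kube-apiserver process, then a crictl query per control-plane component. The pgrep pattern, crictl flags, and component names are copied from the log lines; the loop and variable names are assumptions added here, not minikube code.

	    # Condensed sketch of one probe cycle (commands copied from the log above).
	    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard"

	    # Is an apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process found"

	    # Ask the CRI runtime for containers of each component; an empty result matches
	    # the "No container was found matching ..." warnings above.
	    for c in $components; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      if [ -z "$ids" ]; then
	        echo "No container was found matching \"$c\""
	      else
	        echo "$c: $ids"
	      fi
	    done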
	I1028 18:30:01.196593   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:01.209622   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:01.209693   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:01.243682   67149 cri.go:89] found id: ""
	I1028 18:30:01.243708   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.243718   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:01.243726   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:01.243786   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:01.277617   67149 cri.go:89] found id: ""
	I1028 18:30:01.277646   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.277654   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:01.277660   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:01.277710   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:01.314028   67149 cri.go:89] found id: ""
	I1028 18:30:01.314055   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.314067   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:01.314081   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:01.314152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:01.350324   67149 cri.go:89] found id: ""
	I1028 18:30:01.350348   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.350356   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:01.350362   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:01.350415   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:01.385802   67149 cri.go:89] found id: ""
	I1028 18:30:01.385826   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.385834   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:01.385840   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:01.385883   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:01.421507   67149 cri.go:89] found id: ""
	I1028 18:30:01.421534   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.421545   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:01.421553   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:01.421611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:01.457285   67149 cri.go:89] found id: ""
	I1028 18:30:01.457314   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.457326   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:01.457333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:01.457380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:01.490962   67149 cri.go:89] found id: ""
	I1028 18:30:01.490984   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.490992   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:01.491000   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:01.491012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:01.559906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:01.559937   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:01.559962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:01.639455   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:01.639485   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:01.681968   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:01.681994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:01.736639   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:01.736672   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.251876   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:04.265639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:04.265711   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:04.300133   67149 cri.go:89] found id: ""
	I1028 18:30:04.300159   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.300167   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:04.300173   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:04.300228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:04.335723   67149 cri.go:89] found id: ""
	I1028 18:30:04.335749   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.335760   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:04.335767   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:04.335825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:04.373009   67149 cri.go:89] found id: ""
	I1028 18:30:04.373030   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.373040   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:04.373048   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:04.373113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:04.405969   67149 cri.go:89] found id: ""
	I1028 18:30:04.405993   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.406003   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:04.406011   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:04.406066   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:04.441067   67149 cri.go:89] found id: ""
	I1028 18:30:04.441095   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.441106   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:04.441112   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:04.441176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:04.475231   67149 cri.go:89] found id: ""
	I1028 18:30:04.475260   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.475270   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:04.475277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:04.475342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:04.512970   67149 cri.go:89] found id: ""
	I1028 18:30:04.512998   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.513009   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:04.513017   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:04.513078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:04.547857   67149 cri.go:89] found id: ""
	I1028 18:30:04.547880   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.547890   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:04.547901   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:04.547913   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:04.598870   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:04.598900   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.612678   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:04.612705   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:04.686945   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:04.686967   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:04.686979   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:04.764943   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:04.764992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.310905   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:07.323880   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:07.323946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:07.363597   67149 cri.go:89] found id: ""
	I1028 18:30:07.363626   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.363637   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:07.363645   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:07.363706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:07.401051   67149 cri.go:89] found id: ""
	I1028 18:30:07.401073   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.401082   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:07.401089   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:07.401147   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:07.439710   67149 cri.go:89] found id: ""
	I1028 18:30:07.439735   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.439743   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:07.439748   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:07.439796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:07.476627   67149 cri.go:89] found id: ""
	I1028 18:30:07.476653   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.476663   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:07.476670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:07.476747   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:07.508770   67149 cri.go:89] found id: ""
	I1028 18:30:07.508796   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.508807   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:07.508814   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:07.508874   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:07.543467   67149 cri.go:89] found id: ""
	I1028 18:30:07.543496   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.543506   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:07.543514   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:07.543575   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:07.577181   67149 cri.go:89] found id: ""
	I1028 18:30:07.577204   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.577212   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:07.577217   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:07.577266   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:07.611862   67149 cri.go:89] found id: ""
	I1028 18:30:07.611886   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.611896   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:07.611906   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:07.611924   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:07.699794   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:07.699833   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.747920   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:07.747948   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:07.797402   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:07.797434   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:07.811752   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:07.811778   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:07.881604   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
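	Because no control-plane containers are found, each cycle above falls back to gathering node-level diagnostics. The commands below are the ones the log runs verbatim; only the grouping into one listing and the comments are added here. The describe-nodes step exits with status 1 and the "connection to the server localhost:8443 was refused" error shown above, since nothing is serving the apiserver port yet.

	    # Fallback diagnostics gathered on every cycle (verbatim from the log above).
	    sudo journalctl -u kubelet -n 400        # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
	    sudo journalctl -u crio -n 400           # CRI-O logs
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # container status

	    # Fails while the apiserver is down, exactly as logged above:
	    # "The connection to the server localhost:8443 was refused"
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig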
	I1028 18:30:10.382191   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:10.394572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:10.394624   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:10.428941   67149 cri.go:89] found id: ""
	I1028 18:30:10.428973   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.428984   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:10.429004   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:10.429071   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:10.462526   67149 cri.go:89] found id: ""
	I1028 18:30:10.462558   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.462569   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:10.462578   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:10.462641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:10.498472   67149 cri.go:89] found id: ""
	I1028 18:30:10.498495   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.498503   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:10.498509   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:10.498557   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:10.535400   67149 cri.go:89] found id: ""
	I1028 18:30:10.535422   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.535430   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:10.535436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:10.535483   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:10.568961   67149 cri.go:89] found id: ""
	I1028 18:30:10.568981   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.568988   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:10.568994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:10.569041   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:10.601273   67149 cri.go:89] found id: ""
	I1028 18:30:10.601306   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.601318   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:10.601325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:10.601383   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:10.638093   67149 cri.go:89] found id: ""
	I1028 18:30:10.638124   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.638135   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:10.638141   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:10.638203   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:10.674624   67149 cri.go:89] found id: ""
	I1028 18:30:10.674654   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.674665   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:10.674675   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:10.674688   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:10.714568   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:10.714602   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:10.764732   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:10.764765   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:10.778111   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:10.778139   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:10.854488   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.854516   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:10.854531   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.438803   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:13.452322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:13.452397   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:13.487337   67149 cri.go:89] found id: ""
	I1028 18:30:13.487360   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.487369   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:13.487381   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:13.487488   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:13.521992   67149 cri.go:89] found id: ""
	I1028 18:30:13.522024   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.522034   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:13.522041   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:13.522099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:13.555315   67149 cri.go:89] found id: ""
	I1028 18:30:13.555347   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.555363   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:13.555371   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:13.555431   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:13.589401   67149 cri.go:89] found id: ""
	I1028 18:30:13.589425   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.589436   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:13.589445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:13.589493   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:13.629340   67149 cri.go:89] found id: ""
	I1028 18:30:13.629370   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.629385   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:13.629393   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:13.629454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:13.667307   67149 cri.go:89] found id: ""
	I1028 18:30:13.667337   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.667348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:13.667355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:13.667418   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:13.701457   67149 cri.go:89] found id: ""
	I1028 18:30:13.701513   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.701526   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:13.701536   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:13.701594   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:13.737989   67149 cri.go:89] found id: ""
	I1028 18:30:13.738023   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.738033   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:13.738043   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:13.738056   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:13.791743   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:13.791777   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:13.805501   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:13.805529   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:13.882239   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:13.882262   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:13.882276   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.963480   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:13.963516   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:16.502799   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:16.516397   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:16.516456   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:16.551670   67149 cri.go:89] found id: ""
	I1028 18:30:16.551701   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.551712   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:16.551719   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:16.551771   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:16.584390   67149 cri.go:89] found id: ""
	I1028 18:30:16.584417   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.584428   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:16.584435   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:16.584510   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:16.620868   67149 cri.go:89] found id: ""
	I1028 18:30:16.620892   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.620899   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:16.620904   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:16.620949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:16.654189   67149 cri.go:89] found id: ""
	I1028 18:30:16.654216   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.654225   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:16.654231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:16.654284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:16.694526   67149 cri.go:89] found id: ""
	I1028 18:30:16.694557   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.694568   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:16.694575   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:16.694640   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:16.728857   67149 cri.go:89] found id: ""
	I1028 18:30:16.728884   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.728892   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:16.728898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:16.728948   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:16.763198   67149 cri.go:89] found id: ""
	I1028 18:30:16.763220   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.763227   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:16.763232   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:16.763282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:16.800120   67149 cri.go:89] found id: ""
	I1028 18:30:16.800142   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.800149   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:16.800157   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:16.800167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:16.852710   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:16.852736   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:16.867365   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:16.867395   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:16.945605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:16.945627   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:16.945643   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:17.022838   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:17.022871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.563585   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:19.577612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:19.577683   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:19.615797   67149 cri.go:89] found id: ""
	I1028 18:30:19.615820   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.615829   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:19.615836   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:19.615882   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:19.654780   67149 cri.go:89] found id: ""
	I1028 18:30:19.654802   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.654810   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:19.654816   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:19.654873   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:19.693502   67149 cri.go:89] found id: ""
	I1028 18:30:19.693532   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.693542   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:19.693550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:19.693611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:19.731869   67149 cri.go:89] found id: ""
	I1028 18:30:19.731902   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.731910   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:19.731916   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:19.731974   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:19.765046   67149 cri.go:89] found id: ""
	I1028 18:30:19.765081   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.765092   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:19.765099   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:19.765158   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:19.798082   67149 cri.go:89] found id: ""
	I1028 18:30:19.798105   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.798113   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:19.798119   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:19.798172   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:19.832562   67149 cri.go:89] found id: ""
	I1028 18:30:19.832590   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.832601   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:19.832608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:19.832676   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:19.867213   67149 cri.go:89] found id: ""
	I1028 18:30:19.867240   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.867251   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:19.867260   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:19.867277   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:19.942276   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:19.942304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.977642   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:19.977671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:20.027077   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:20.027109   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:20.040159   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:20.040181   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:20.113350   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
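	The timestamps above show the whole check repeating on a roughly three-second cadence. A rough sketch of such a wait loop follows; the interval, the ten-minute deadline, and the choice of pgrep as the health probe are illustrative assumptions, not minikube's actual wait logic or settings.

	    # Illustrative poll loop; interval and timeout are assumptions inferred from
	    # the ~3 s cadence of the log above, not minikube's real values.
	    deadline=$(( $(date +%s) + 600 ))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3
	    done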
	I1028 18:30:22.614379   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:22.628550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:22.628607   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:22.662647   67149 cri.go:89] found id: ""
	I1028 18:30:22.662670   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.662677   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:22.662683   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:22.662732   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:22.696697   67149 cri.go:89] found id: ""
	I1028 18:30:22.696736   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.696747   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:22.696753   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:22.696815   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:22.730011   67149 cri.go:89] found id: ""
	I1028 18:30:22.730039   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.730049   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:22.730056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:22.730114   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:22.766604   67149 cri.go:89] found id: ""
	I1028 18:30:22.766629   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.766639   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:22.766647   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:22.766703   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:22.800581   67149 cri.go:89] found id: ""
	I1028 18:30:22.800608   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.800617   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:22.800625   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:22.800692   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:22.832742   67149 cri.go:89] found id: ""
	I1028 18:30:22.832767   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.832775   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:22.832780   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:22.832823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:22.865850   67149 cri.go:89] found id: ""
	I1028 18:30:22.865876   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.865885   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:22.865892   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:22.865949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:22.904410   67149 cri.go:89] found id: ""
	I1028 18:30:22.904433   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.904443   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:22.904454   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:22.904482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:22.959275   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:22.959310   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:22.972630   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:22.972652   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:23.043851   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:23.043873   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:23.043886   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:23.121657   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:23.121686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:25.662109   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:25.676366   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:25.676443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:25.715192   67149 cri.go:89] found id: ""
	I1028 18:30:25.715216   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.715224   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:25.715230   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:25.715283   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:25.754736   67149 cri.go:89] found id: ""
	I1028 18:30:25.754765   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.754773   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:25.754779   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:25.754823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:25.794179   67149 cri.go:89] found id: ""
	I1028 18:30:25.794207   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.794216   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:25.794224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:25.794278   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:25.833206   67149 cri.go:89] found id: ""
	I1028 18:30:25.833238   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.833246   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:25.833252   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:25.833298   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:25.871628   67149 cri.go:89] found id: ""
	I1028 18:30:25.871659   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.871669   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:25.871677   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:25.871735   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:25.910900   67149 cri.go:89] found id: ""
	I1028 18:30:25.910924   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.910934   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:25.910942   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:25.911001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:25.943972   67149 cri.go:89] found id: ""
	I1028 18:30:25.943992   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.943999   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:25.944004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:25.944059   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:25.982521   67149 cri.go:89] found id: ""
	I1028 18:30:25.982544   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.982551   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:25.982559   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:25.982569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:26.033003   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:26.033031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:26.046480   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:26.046503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:26.117194   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:26.117213   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:26.117230   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:26.195399   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:26.195430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:28.737237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:28.751846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:28.751910   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:28.794259   67149 cri.go:89] found id: ""
	I1028 18:30:28.794290   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.794301   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:28.794308   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:28.794374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:28.827573   67149 cri.go:89] found id: ""
	I1028 18:30:28.827603   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.827611   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:28.827616   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:28.827671   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:28.860676   67149 cri.go:89] found id: ""
	I1028 18:30:28.860702   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.860713   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:28.860721   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:28.860780   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:28.897302   67149 cri.go:89] found id: ""
	I1028 18:30:28.897327   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.897343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:28.897351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:28.897410   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:28.933425   67149 cri.go:89] found id: ""
	I1028 18:30:28.933454   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.933464   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:28.933471   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:28.933535   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:28.966004   67149 cri.go:89] found id: ""
	I1028 18:30:28.966032   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.966043   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:28.966051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:28.966107   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:29.002788   67149 cri.go:89] found id: ""
	I1028 18:30:29.002818   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.002829   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:29.002835   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:29.002894   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:29.033351   67149 cri.go:89] found id: ""
	I1028 18:30:29.033379   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.033389   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:29.033400   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:29.033420   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:29.107997   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:29.108025   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:29.144727   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:29.144753   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:29.206487   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:29.206521   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:29.219722   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:29.219744   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:29.288254   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:31.789035   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:31.802587   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:31.802650   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:31.838372   67149 cri.go:89] found id: ""
	I1028 18:30:31.838401   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.838410   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:31.838416   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:31.838469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:31.877794   67149 cri.go:89] found id: ""
	I1028 18:30:31.877822   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.877833   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:31.877840   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:31.877896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:31.917442   67149 cri.go:89] found id: ""
	I1028 18:30:31.917472   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.917483   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:31.917490   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:31.917549   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:31.951900   67149 cri.go:89] found id: ""
	I1028 18:30:31.951931   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.951943   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:31.951951   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:31.952008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:31.988011   67149 cri.go:89] found id: ""
	I1028 18:30:31.988040   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.988051   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:31.988058   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:31.988116   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:32.021042   67149 cri.go:89] found id: ""
	I1028 18:30:32.021063   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.021071   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:32.021077   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:32.021124   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:32.053748   67149 cri.go:89] found id: ""
	I1028 18:30:32.053770   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.053778   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:32.053783   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:32.053837   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:32.089725   67149 cri.go:89] found id: ""
	I1028 18:30:32.089756   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.089766   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:32.089777   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:32.089790   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:32.140000   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:32.140031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:32.154023   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:32.154046   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:32.231222   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:32.231242   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:32.231255   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:32.311354   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:32.311388   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:34.852507   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:34.867133   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:34.867198   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:34.901201   67149 cri.go:89] found id: ""
	I1028 18:30:34.901228   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.901238   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:34.901245   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:34.901300   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:34.962788   67149 cri.go:89] found id: ""
	I1028 18:30:34.962814   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.962824   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:34.962835   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:34.962896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:34.996879   67149 cri.go:89] found id: ""
	I1028 18:30:34.996906   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.996917   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:34.996926   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:34.996986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:35.033516   67149 cri.go:89] found id: ""
	I1028 18:30:35.033541   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.033553   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:35.033560   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:35.033622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:35.066903   67149 cri.go:89] found id: ""
	I1028 18:30:35.066933   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.066945   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:35.066953   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:35.067010   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:35.099675   67149 cri.go:89] found id: ""
	I1028 18:30:35.099697   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.099704   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:35.099710   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:35.099755   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:35.133595   67149 cri.go:89] found id: ""
	I1028 18:30:35.133623   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.133633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:35.133641   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:35.133699   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:35.172236   67149 cri.go:89] found id: ""
	I1028 18:30:35.172262   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.172272   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:35.172282   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:35.172296   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:35.224952   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:35.224981   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:35.238554   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:35.238578   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:35.318991   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:35.319024   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:35.319040   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:35.399763   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:35.399799   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:37.947847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:37.963147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:37.963210   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.001768   67149 cri.go:89] found id: ""
	I1028 18:30:38.001792   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.001802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:38.001809   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:38.001868   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:38.042877   67149 cri.go:89] found id: ""
	I1028 18:30:38.042905   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.042916   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:38.042924   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:38.042986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:38.078116   67149 cri.go:89] found id: ""
	I1028 18:30:38.078143   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.078154   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:38.078162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:38.078226   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:38.111082   67149 cri.go:89] found id: ""
	I1028 18:30:38.111108   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.111119   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:38.111127   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:38.111187   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:38.144863   67149 cri.go:89] found id: ""
	I1028 18:30:38.144889   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.144898   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:38.144906   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:38.144962   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:38.178671   67149 cri.go:89] found id: ""
	I1028 18:30:38.178701   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.178712   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:38.178719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:38.178774   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:38.218441   67149 cri.go:89] found id: ""
	I1028 18:30:38.218464   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.218472   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:38.218477   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:38.218528   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:38.252697   67149 cri.go:89] found id: ""
	I1028 18:30:38.252719   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.252727   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:38.252736   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:38.252745   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:38.304813   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:38.304853   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:38.318437   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:38.318462   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:38.389959   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:38.389987   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:38.390002   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:38.471462   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:38.471495   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:41.013647   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:41.027167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:41.027233   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:41.062559   67149 cri.go:89] found id: ""
	I1028 18:30:41.062590   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.062601   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:41.062609   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:41.062667   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:41.097732   67149 cri.go:89] found id: ""
	I1028 18:30:41.097758   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.097767   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:41.097773   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:41.097819   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:41.133067   67149 cri.go:89] found id: ""
	I1028 18:30:41.133089   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.133097   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:41.133102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:41.133150   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:41.168640   67149 cri.go:89] found id: ""
	I1028 18:30:41.168674   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.168684   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:41.168691   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:41.168754   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:41.206429   67149 cri.go:89] found id: ""
	I1028 18:30:41.206453   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.206463   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:41.206470   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:41.206527   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:41.248326   67149 cri.go:89] found id: ""
	I1028 18:30:41.248350   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.248360   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:41.248369   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:41.248429   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:41.283703   67149 cri.go:89] found id: ""
	I1028 18:30:41.283734   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.283746   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:41.283753   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:41.283810   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:41.327759   67149 cri.go:89] found id: ""
	I1028 18:30:41.327786   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.327796   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:41.327807   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:41.327820   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:41.388563   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:41.388593   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:41.406411   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:41.406435   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:41.490605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:41.490626   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:41.490637   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:41.569386   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:41.569433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.109394   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:44.123047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:44.123113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:44.156762   67149 cri.go:89] found id: ""
	I1028 18:30:44.156792   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.156802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:44.156810   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:44.156867   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:44.192244   67149 cri.go:89] found id: ""
	I1028 18:30:44.192271   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.192282   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:44.192289   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:44.192357   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:44.224059   67149 cri.go:89] found id: ""
	I1028 18:30:44.224094   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.224101   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:44.224115   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:44.224168   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:44.258750   67149 cri.go:89] found id: ""
	I1028 18:30:44.258779   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.258789   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:44.258797   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:44.258854   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:44.295600   67149 cri.go:89] found id: ""
	I1028 18:30:44.295624   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.295632   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:44.295638   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:44.295684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:44.327278   67149 cri.go:89] found id: ""
	I1028 18:30:44.327302   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.327309   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:44.327315   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:44.327370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:44.360734   67149 cri.go:89] found id: ""
	I1028 18:30:44.360760   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.360768   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:44.360774   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:44.360822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:44.398198   67149 cri.go:89] found id: ""
	I1028 18:30:44.398224   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.398234   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:44.398249   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:44.398261   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:44.476135   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:44.476167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.514073   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:44.514105   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:44.563001   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:44.563033   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:44.576882   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:44.576912   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:44.648532   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.149133   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:47.165612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:47.165696   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:47.203960   67149 cri.go:89] found id: ""
	I1028 18:30:47.203987   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.203996   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:47.204002   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:47.204065   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:47.236731   67149 cri.go:89] found id: ""
	I1028 18:30:47.236757   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.236766   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:47.236774   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:47.236828   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:47.273779   67149 cri.go:89] found id: ""
	I1028 18:30:47.273808   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.273820   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:47.273826   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:47.273878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:47.309996   67149 cri.go:89] found id: ""
	I1028 18:30:47.310020   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.310028   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:47.310034   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:47.310108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:47.352904   67149 cri.go:89] found id: ""
	I1028 18:30:47.352925   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.352934   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:47.352939   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:47.352990   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:47.389641   67149 cri.go:89] found id: ""
	I1028 18:30:47.389660   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.389667   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:47.389672   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:47.389718   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:47.422591   67149 cri.go:89] found id: ""
	I1028 18:30:47.422622   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.422632   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:47.422639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:47.422694   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:47.454849   67149 cri.go:89] found id: ""
	I1028 18:30:47.454876   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.454886   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:47.454895   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:47.454916   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:47.506176   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:47.506203   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:47.519084   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:47.519108   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:47.585660   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.585681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:47.585696   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:47.664904   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:47.664939   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:50.203775   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:50.216923   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:50.216992   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:50.252506   67149 cri.go:89] found id: ""
	I1028 18:30:50.252531   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.252541   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:50.252548   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:50.252608   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:50.288641   67149 cri.go:89] found id: ""
	I1028 18:30:50.288669   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.288678   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:50.288684   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:50.288739   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:50.322130   67149 cri.go:89] found id: ""
	I1028 18:30:50.322163   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.322174   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:50.322182   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:50.322240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:50.359508   67149 cri.go:89] found id: ""
	I1028 18:30:50.359536   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.359546   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:50.359554   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:50.359617   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:50.393571   67149 cri.go:89] found id: ""
	I1028 18:30:50.393607   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.393618   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:50.393626   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:50.393685   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:50.428683   67149 cri.go:89] found id: ""
	I1028 18:30:50.428705   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.428713   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:50.428719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:50.428767   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:50.464086   67149 cri.go:89] found id: ""
	I1028 18:30:50.464111   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.464119   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:50.464125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:50.464183   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:50.496695   67149 cri.go:89] found id: ""
	I1028 18:30:50.496726   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.496736   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:50.496745   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:50.496755   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:50.545495   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:50.545526   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:50.558819   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:50.558852   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:50.636344   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:50.636369   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:50.636384   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:50.720270   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:50.720304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.261194   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:53.274451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:53.274507   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:53.306258   67149 cri.go:89] found id: ""
	I1028 18:30:53.306286   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.306295   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:53.306301   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:53.306362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:53.340222   67149 cri.go:89] found id: ""
	I1028 18:30:53.340244   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.340253   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:53.340258   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:53.340322   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:53.377726   67149 cri.go:89] found id: ""
	I1028 18:30:53.377750   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.377760   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:53.377767   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:53.377820   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:53.414228   67149 cri.go:89] found id: ""
	I1028 18:30:53.414252   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.414262   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:53.414275   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:53.414332   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:53.449152   67149 cri.go:89] found id: ""
	I1028 18:30:53.449179   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.449186   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:53.449192   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:53.449237   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:53.485678   67149 cri.go:89] found id: ""
	I1028 18:30:53.485705   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.485716   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:53.485723   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:53.485784   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:53.520764   67149 cri.go:89] found id: ""
	I1028 18:30:53.520791   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.520802   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:53.520810   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:53.520870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:53.561153   67149 cri.go:89] found id: ""
	I1028 18:30:53.561176   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.561184   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:53.561192   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:53.561202   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:53.642192   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:53.642242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.686527   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:53.686567   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:53.740815   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:53.740849   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:53.754577   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:53.754604   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:53.823717   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.324847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:56.338572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:56.338628   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:56.375482   67149 cri.go:89] found id: ""
	I1028 18:30:56.375506   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.375517   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:56.375524   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:56.375580   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:56.407894   67149 cri.go:89] found id: ""
	I1028 18:30:56.407921   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.407931   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:56.407938   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:56.407993   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:56.447006   67149 cri.go:89] found id: ""
	I1028 18:30:56.447037   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.447048   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:56.447055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:56.447112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:56.483850   67149 cri.go:89] found id: ""
	I1028 18:30:56.483880   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.483890   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:56.483898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:56.483958   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:56.520008   67149 cri.go:89] found id: ""
	I1028 18:30:56.520038   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.520045   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:56.520051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:56.520099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:56.552567   67149 cri.go:89] found id: ""
	I1028 18:30:56.552592   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.552600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:56.552608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:56.552658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:56.591277   67149 cri.go:89] found id: ""
	I1028 18:30:56.591297   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.591305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:56.591311   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:56.591362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:56.632164   67149 cri.go:89] found id: ""
	I1028 18:30:56.632186   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.632194   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:56.632202   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:56.632219   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:56.683590   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:56.683623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:56.698509   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:56.698539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:56.777141   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.777171   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:56.777188   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:56.851801   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:56.851842   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.394266   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:59.408460   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:59.408545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:59.444066   67149 cri.go:89] found id: ""
	I1028 18:30:59.444092   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.444104   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:59.444112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:59.444165   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:59.479531   67149 cri.go:89] found id: ""
	I1028 18:30:59.479557   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.479568   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:59.479576   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:59.479622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:59.519467   67149 cri.go:89] found id: ""
	I1028 18:30:59.519489   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.519496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:59.519502   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:59.519546   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:59.551108   67149 cri.go:89] found id: ""
	I1028 18:30:59.551131   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.551140   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:59.551146   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:59.551197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:59.585875   67149 cri.go:89] found id: ""
	I1028 18:30:59.585899   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.585907   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:59.585912   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:59.585968   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:59.620571   67149 cri.go:89] found id: ""
	I1028 18:30:59.620595   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.620602   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:59.620608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:59.620655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:59.653927   67149 cri.go:89] found id: ""
	I1028 18:30:59.653954   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.653965   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:59.653972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:59.654039   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:59.689138   67149 cri.go:89] found id: ""
	I1028 18:30:59.689160   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.689168   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:59.689175   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:59.689185   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:59.768231   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:59.768270   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.811980   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:59.812007   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:59.864509   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:59.864543   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:59.879329   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:59.879354   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:59.950134   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.450237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:02.464689   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:02.464765   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:02.500938   67149 cri.go:89] found id: ""
	I1028 18:31:02.500964   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.500975   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:02.500982   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:02.501043   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:02.534580   67149 cri.go:89] found id: ""
	I1028 18:31:02.534608   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.534620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:02.534628   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:02.534684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:02.570203   67149 cri.go:89] found id: ""
	I1028 18:31:02.570224   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.570231   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:02.570237   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:02.570284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:02.606037   67149 cri.go:89] found id: ""
	I1028 18:31:02.606064   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.606072   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:02.606082   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:02.606135   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:02.640622   67149 cri.go:89] found id: ""
	I1028 18:31:02.640646   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.640656   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:02.640663   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:02.640723   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:02.676406   67149 cri.go:89] found id: ""
	I1028 18:31:02.676434   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.676444   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:02.676451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:02.676520   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:02.710284   67149 cri.go:89] found id: ""
	I1028 18:31:02.710308   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.710316   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:02.710322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:02.710376   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:02.750853   67149 cri.go:89] found id: ""
	I1028 18:31:02.750899   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.750910   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:02.750918   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:02.750929   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:02.825886   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.825913   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:02.825927   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:02.904828   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:02.904857   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:02.941886   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:02.941922   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:02.991603   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:02.991632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.505655   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:05.520582   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:05.520638   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:05.558724   67149 cri.go:89] found id: ""
	I1028 18:31:05.558753   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.558763   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:05.558770   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:05.558816   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:05.597864   67149 cri.go:89] found id: ""
	I1028 18:31:05.597885   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.597893   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:05.597898   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:05.597956   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:05.643571   67149 cri.go:89] found id: ""
	I1028 18:31:05.643602   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.643613   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:05.643620   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:05.643679   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:05.682010   67149 cri.go:89] found id: ""
	I1028 18:31:05.682039   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.682048   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:05.682053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:05.682106   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:05.716043   67149 cri.go:89] found id: ""
	I1028 18:31:05.716067   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.716080   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:05.716086   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:05.716134   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:05.750962   67149 cri.go:89] found id: ""
	I1028 18:31:05.750995   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.751010   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:05.751016   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:05.751078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:05.785059   67149 cri.go:89] found id: ""
	I1028 18:31:05.785111   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.785124   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:05.785132   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:05.785193   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:05.833525   67149 cri.go:89] found id: ""
	I1028 18:31:05.833550   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.833559   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:05.833566   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:05.833579   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:05.887766   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:05.887796   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.902575   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:05.902606   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:05.975082   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:05.975108   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:05.975122   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:06.050369   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:06.050396   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.593506   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:08.606188   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:08.606251   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:08.645186   67149 cri.go:89] found id: ""
	I1028 18:31:08.645217   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.645227   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:08.645235   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:08.645294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:08.680728   67149 cri.go:89] found id: ""
	I1028 18:31:08.680759   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.680771   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:08.680778   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:08.680833   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:08.714733   67149 cri.go:89] found id: ""
	I1028 18:31:08.714760   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.714772   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:08.714779   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:08.714844   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:08.750293   67149 cri.go:89] found id: ""
	I1028 18:31:08.750323   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.750333   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:08.750339   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:08.750390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:08.784521   67149 cri.go:89] found id: ""
	I1028 18:31:08.784548   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.784559   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:08.784566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:08.784629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:08.818808   67149 cri.go:89] found id: ""
	I1028 18:31:08.818838   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.818849   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:08.818857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:08.818920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:08.855575   67149 cri.go:89] found id: ""
	I1028 18:31:08.855608   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.855619   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:08.855633   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:08.855690   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:08.892996   67149 cri.go:89] found id: ""
	I1028 18:31:08.893024   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.893035   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:08.893045   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:08.893064   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.937056   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:08.937084   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:08.989013   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:08.989048   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:09.002048   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:09.002077   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:09.075247   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:09.075277   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:09.075290   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:11.654701   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:11.668066   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:11.668146   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:11.701666   67149 cri.go:89] found id: ""
	I1028 18:31:11.701693   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.701703   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:11.701710   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:11.701769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:11.738342   67149 cri.go:89] found id: ""
	I1028 18:31:11.738368   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.738376   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:11.738381   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:11.738428   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:11.772009   67149 cri.go:89] found id: ""
	I1028 18:31:11.772035   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.772045   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:11.772053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:11.772118   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:11.816210   67149 cri.go:89] found id: ""
	I1028 18:31:11.816237   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.816245   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:11.816251   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:11.816314   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:11.856675   67149 cri.go:89] found id: ""
	I1028 18:31:11.856704   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.856714   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:11.856722   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:11.856785   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:11.896566   67149 cri.go:89] found id: ""
	I1028 18:31:11.896592   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.896600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:11.896606   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:11.896665   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:11.932599   67149 cri.go:89] found id: ""
	I1028 18:31:11.932624   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.932633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:11.932640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:11.932704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:11.966952   67149 cri.go:89] found id: ""
	I1028 18:31:11.966982   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.967008   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:11.967019   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:11.967037   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:12.016465   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:12.016502   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:12.029314   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:12.029343   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:12.098906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:12.098936   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:12.098954   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:12.176440   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:12.176489   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:14.720173   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:14.733796   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:14.733848   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:14.774072   67149 cri.go:89] found id: ""
	I1028 18:31:14.774093   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.774100   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:14.774106   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:14.774152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:14.816116   67149 cri.go:89] found id: ""
	I1028 18:31:14.816145   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.816158   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:14.816166   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:14.816224   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:14.851167   67149 cri.go:89] found id: ""
	I1028 18:31:14.851189   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.851196   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:14.851202   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:14.851247   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:14.885887   67149 cri.go:89] found id: ""
	I1028 18:31:14.885918   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.885926   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:14.885931   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:14.885976   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:14.923787   67149 cri.go:89] found id: ""
	I1028 18:31:14.923815   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.923826   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:14.923833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:14.923892   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:14.960117   67149 cri.go:89] found id: ""
	I1028 18:31:14.960148   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.960160   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:14.960167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:14.960240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:14.998418   67149 cri.go:89] found id: ""
	I1028 18:31:14.998458   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.998470   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:14.998485   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:14.998545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:15.031985   67149 cri.go:89] found id: ""
	I1028 18:31:15.032005   67149 logs.go:282] 0 containers: []
	W1028 18:31:15.032014   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:15.032027   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:15.032038   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:15.045239   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:15.045264   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:15.118954   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:15.118978   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:15.118994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:15.200538   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:15.200569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:15.243581   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:15.243603   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:17.794670   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:17.808325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:17.808380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:17.841888   67149 cri.go:89] found id: ""
	I1028 18:31:17.841911   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.841919   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:17.841925   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:17.841979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:17.881241   67149 cri.go:89] found id: ""
	I1028 18:31:17.881261   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.881269   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:17.881274   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:17.881331   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:17.922394   67149 cri.go:89] found id: ""
	I1028 18:31:17.922419   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.922428   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:17.922434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:17.922498   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:17.963519   67149 cri.go:89] found id: ""
	I1028 18:31:17.963546   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.963558   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:17.963566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:17.963641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:18.003181   67149 cri.go:89] found id: ""
	I1028 18:31:18.003202   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.003209   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:18.003214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:18.003261   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:18.040305   67149 cri.go:89] found id: ""
	I1028 18:31:18.040338   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.040348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:18.040356   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:18.040413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:18.077671   67149 cri.go:89] found id: ""
	I1028 18:31:18.077696   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.077708   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:18.077715   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:18.077777   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:18.116155   67149 cri.go:89] found id: ""
	I1028 18:31:18.116176   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.116182   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:18.116190   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:18.116201   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:18.168343   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:18.168370   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:18.181962   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:18.181988   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:18.260227   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:18.260251   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:18.260265   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:18.346588   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:18.346620   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:20.885832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:20.899053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:20.899121   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:20.954770   67149 cri.go:89] found id: ""
	I1028 18:31:20.954797   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.954806   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:20.954812   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:20.954870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:20.989809   67149 cri.go:89] found id: ""
	I1028 18:31:20.989834   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.989842   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:20.989848   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:20.989900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:21.027150   67149 cri.go:89] found id: ""
	I1028 18:31:21.027179   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.027191   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:21.027199   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:21.027259   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:21.061235   67149 cri.go:89] found id: ""
	I1028 18:31:21.061260   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.061270   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:21.061277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:21.061337   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:21.095451   67149 cri.go:89] found id: ""
	I1028 18:31:21.095473   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.095481   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:21.095487   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:21.095540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:21.135576   67149 cri.go:89] found id: ""
	I1028 18:31:21.135598   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.135606   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:21.135612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:21.135660   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:21.170816   67149 cri.go:89] found id: ""
	I1028 18:31:21.170845   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.170854   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:21.170860   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:21.170920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:21.204616   67149 cri.go:89] found id: ""
	I1028 18:31:21.204649   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.204660   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:21.204672   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:21.204686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:21.254523   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:21.254556   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:21.267981   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:21.268005   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:21.336786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:21.336813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:21.336828   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:21.420596   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:21.420625   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:23.962346   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:23.976628   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:23.976697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:24.016418   67149 cri.go:89] found id: ""
	I1028 18:31:24.016444   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.016453   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:24.016458   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:24.016533   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:24.051448   67149 cri.go:89] found id: ""
	I1028 18:31:24.051474   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.051483   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:24.051488   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:24.051554   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:24.090787   67149 cri.go:89] found id: ""
	I1028 18:31:24.090816   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.090829   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:24.090836   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:24.090900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:24.126315   67149 cri.go:89] found id: ""
	I1028 18:31:24.126342   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.126349   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:24.126355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:24.126402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:24.161340   67149 cri.go:89] found id: ""
	I1028 18:31:24.161367   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.161379   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:24.161387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:24.161445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:24.195991   67149 cri.go:89] found id: ""
	I1028 18:31:24.196017   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.196028   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:24.196036   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:24.196084   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:24.229789   67149 cri.go:89] found id: ""
	I1028 18:31:24.229822   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.229837   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:24.229845   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:24.229906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:24.264724   67149 cri.go:89] found id: ""
	I1028 18:31:24.264748   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.264757   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:24.264765   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:24.264775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:24.303551   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:24.303574   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:24.351693   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:24.351725   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:24.364537   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:24.364566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:24.436935   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:24.436955   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:24.436966   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.014928   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:27.029540   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:27.029609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:27.064598   67149 cri.go:89] found id: ""
	I1028 18:31:27.064626   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.064636   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:27.064643   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:27.064704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:27.099432   67149 cri.go:89] found id: ""
	I1028 18:31:27.099455   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.099465   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:27.099472   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:27.099531   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:27.133961   67149 cri.go:89] found id: ""
	I1028 18:31:27.133996   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.134006   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:27.134012   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:27.134075   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:27.171976   67149 cri.go:89] found id: ""
	I1028 18:31:27.172003   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.172014   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:27.172022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:27.172092   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:27.205681   67149 cri.go:89] found id: ""
	I1028 18:31:27.205710   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.205721   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:27.205730   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:27.205793   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:27.244571   67149 cri.go:89] found id: ""
	I1028 18:31:27.244603   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.244612   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:27.244617   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:27.244661   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:27.281692   67149 cri.go:89] found id: ""
	I1028 18:31:27.281722   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.281738   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:27.281746   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:27.281800   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:27.335003   67149 cri.go:89] found id: ""
	I1028 18:31:27.335033   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.335041   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:27.335049   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:27.335066   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:27.353992   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:27.354017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:27.457103   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:27.457125   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:27.457136   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.537717   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:27.537746   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:27.579842   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:27.579870   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.133749   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:30.147518   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:30.147576   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:30.182683   67149 cri.go:89] found id: ""
	I1028 18:31:30.182711   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.182722   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:30.182729   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:30.182792   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:30.215088   67149 cri.go:89] found id: ""
	I1028 18:31:30.215109   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.215118   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:30.215124   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:30.215176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:30.250169   67149 cri.go:89] found id: ""
	I1028 18:31:30.250194   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.250202   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:30.250207   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:30.250284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:30.286028   67149 cri.go:89] found id: ""
	I1028 18:31:30.286055   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.286062   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:30.286069   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:30.286112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:30.320503   67149 cri.go:89] found id: ""
	I1028 18:31:30.320528   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.320539   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:30.320547   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:30.320604   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:30.352773   67149 cri.go:89] found id: ""
	I1028 18:31:30.352793   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.352800   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:30.352806   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:30.352859   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:30.385922   67149 cri.go:89] found id: ""
	I1028 18:31:30.385944   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.385951   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:30.385956   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:30.385999   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:30.421909   67149 cri.go:89] found id: ""
	I1028 18:31:30.421933   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.421945   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:30.421956   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:30.421971   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.470917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:30.470944   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:30.484033   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:30.484059   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:30.554810   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:30.554836   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:30.554850   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:30.634403   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:30.634432   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.182127   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:33.194994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:33.195063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:33.233076   67149 cri.go:89] found id: ""
	I1028 18:31:33.233098   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.233106   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:33.233112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:33.233160   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:33.266963   67149 cri.go:89] found id: ""
	I1028 18:31:33.266998   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.267021   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:33.267028   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:33.267083   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:33.305888   67149 cri.go:89] found id: ""
	I1028 18:31:33.305914   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.305922   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:33.305928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:33.305979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:33.339451   67149 cri.go:89] found id: ""
	I1028 18:31:33.339479   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.339489   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:33.339496   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:33.339555   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:33.375038   67149 cri.go:89] found id: ""
	I1028 18:31:33.375065   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.375073   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:33.375079   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:33.375125   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:33.409157   67149 cri.go:89] found id: ""
	I1028 18:31:33.409176   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.409183   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:33.409189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:33.409243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:33.449108   67149 cri.go:89] found id: ""
	I1028 18:31:33.449133   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.449149   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:33.449155   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:33.449227   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:33.491194   67149 cri.go:89] found id: ""
	I1028 18:31:33.491215   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.491224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:33.491232   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:33.491250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.530590   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:33.530618   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:33.581933   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:33.581962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:33.595387   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:33.595416   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:33.664855   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:33.664882   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:33.664899   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.242724   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:36.256152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:36.256221   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:36.292452   67149 cri.go:89] found id: ""
	I1028 18:31:36.292494   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.292504   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:36.292511   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:36.292568   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:36.325210   67149 cri.go:89] found id: ""
	I1028 18:31:36.325231   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.325238   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:36.325244   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:36.325293   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:36.356738   67149 cri.go:89] found id: ""
	I1028 18:31:36.356757   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.356764   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:36.356769   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:36.356827   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:36.389678   67149 cri.go:89] found id: ""
	I1028 18:31:36.389704   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.389712   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:36.389717   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:36.389775   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:36.422956   67149 cri.go:89] found id: ""
	I1028 18:31:36.422989   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.422998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:36.423005   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:36.423061   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:36.456877   67149 cri.go:89] found id: ""
	I1028 18:31:36.456904   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.456914   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:36.456921   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:36.456983   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:36.489728   67149 cri.go:89] found id: ""
	I1028 18:31:36.489758   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.489766   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:36.489772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:36.489829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:36.524307   67149 cri.go:89] found id: ""
	I1028 18:31:36.524338   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.524350   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:36.524360   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:36.524372   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:36.574771   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:36.574800   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:36.587485   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:36.587506   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:36.655922   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:36.655949   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:36.655962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.738312   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:36.738352   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.279425   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:39.293108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:39.293167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:39.325542   67149 cri.go:89] found id: ""
	I1028 18:31:39.325573   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.325584   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:39.325592   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:39.325656   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:39.357581   67149 cri.go:89] found id: ""
	I1028 18:31:39.357609   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.357620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:39.357627   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:39.357681   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:39.394833   67149 cri.go:89] found id: ""
	I1028 18:31:39.394853   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.394860   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:39.394866   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:39.394916   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:39.430151   67149 cri.go:89] found id: ""
	I1028 18:31:39.430178   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.430188   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:39.430196   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:39.430254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:39.468060   67149 cri.go:89] found id: ""
	I1028 18:31:39.468089   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.468100   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:39.468108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:39.468181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:39.503702   67149 cri.go:89] found id: ""
	I1028 18:31:39.503734   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.503752   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:39.503761   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:39.503829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:39.536193   67149 cri.go:89] found id: ""
	I1028 18:31:39.536221   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.536233   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:39.536240   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:39.536305   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:39.570194   67149 cri.go:89] found id: ""
	I1028 18:31:39.570215   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.570224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:39.570232   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:39.570245   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:39.647179   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:39.647207   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:39.647220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:39.725980   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:39.726012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.765671   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:39.765704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:39.818257   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:39.818289   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.332335   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:42.344964   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:42.345031   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:42.380904   67149 cri.go:89] found id: ""
	I1028 18:31:42.380926   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.380933   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:42.380938   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:42.380982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:42.414361   67149 cri.go:89] found id: ""
	I1028 18:31:42.414385   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.414393   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:42.414399   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:42.414443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:42.447931   67149 cri.go:89] found id: ""
	I1028 18:31:42.447957   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.447968   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:42.447975   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:42.448024   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:42.483262   67149 cri.go:89] found id: ""
	I1028 18:31:42.483283   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.483296   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:42.483301   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:42.483365   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:42.516665   67149 cri.go:89] found id: ""
	I1028 18:31:42.516693   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.516702   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:42.516709   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:42.516776   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:42.550160   67149 cri.go:89] found id: ""
	I1028 18:31:42.550181   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.550188   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:42.550193   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:42.550238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:42.583509   67149 cri.go:89] found id: ""
	I1028 18:31:42.583535   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.583546   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:42.583552   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:42.583611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:42.619276   67149 cri.go:89] found id: ""
	I1028 18:31:42.619312   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.619320   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:42.619328   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:42.619338   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:42.692442   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:42.692487   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:42.731768   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:42.731798   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:42.783997   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:42.784043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.797809   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:42.797834   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:42.863351   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.363648   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:45.376277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:45.376341   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:45.415231   67149 cri.go:89] found id: ""
	I1028 18:31:45.415255   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.415265   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:45.415273   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:45.415330   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:45.451133   67149 cri.go:89] found id: ""
	I1028 18:31:45.451157   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.451164   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:45.451170   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:45.451228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:45.483526   67149 cri.go:89] found id: ""
	I1028 18:31:45.483552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.483562   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:45.483567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:45.483621   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:45.515799   67149 cri.go:89] found id: ""
	I1028 18:31:45.515828   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.515838   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:45.515846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:45.515906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:45.548043   67149 cri.go:89] found id: ""
	I1028 18:31:45.548071   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.548082   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:45.548090   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:45.548153   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:45.581525   67149 cri.go:89] found id: ""
	I1028 18:31:45.581552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.581563   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:45.581570   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:45.581629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:45.622258   67149 cri.go:89] found id: ""
	I1028 18:31:45.622282   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.622290   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:45.622296   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:45.622353   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:45.661255   67149 cri.go:89] found id: ""
	I1028 18:31:45.661275   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.661284   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:45.661292   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:45.661304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:45.675209   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:45.675242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:45.737546   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.737573   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:45.737592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:45.816012   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:45.816043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:45.854135   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:45.854167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.406233   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:48.418950   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:48.419001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:48.452933   67149 cri.go:89] found id: ""
	I1028 18:31:48.452952   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.452961   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:48.452975   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:48.453034   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:48.489604   67149 cri.go:89] found id: ""
	I1028 18:31:48.489630   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.489640   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:48.489648   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:48.489706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:48.525463   67149 cri.go:89] found id: ""
	I1028 18:31:48.525493   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.525504   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:48.525511   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:48.525566   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:48.559266   67149 cri.go:89] found id: ""
	I1028 18:31:48.559294   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.559302   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:48.559308   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:48.559363   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:48.592670   67149 cri.go:89] found id: ""
	I1028 18:31:48.592695   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.592706   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:48.592714   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:48.592769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:48.627175   67149 cri.go:89] found id: ""
	I1028 18:31:48.627196   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.627205   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:48.627213   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:48.627260   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:48.661864   67149 cri.go:89] found id: ""
	I1028 18:31:48.661887   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.661895   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:48.661901   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:48.661946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:48.696731   67149 cri.go:89] found id: ""
	I1028 18:31:48.696756   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.696765   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:48.696775   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:48.696788   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.745390   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:48.745417   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:48.759218   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:48.759241   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:48.830299   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:48.830331   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:48.830349   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:48.909934   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:48.909963   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.451597   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:51.464889   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:51.464943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:51.499962   67149 cri.go:89] found id: ""
	I1028 18:31:51.499990   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.500001   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:51.500010   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:51.500069   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:51.532341   67149 cri.go:89] found id: ""
	I1028 18:31:51.532370   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.532380   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:51.532388   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:51.532443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:51.565531   67149 cri.go:89] found id: ""
	I1028 18:31:51.565554   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.565561   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:51.565567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:51.565614   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:51.602859   67149 cri.go:89] found id: ""
	I1028 18:31:51.602882   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.602894   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:51.602899   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:51.602943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:51.639896   67149 cri.go:89] found id: ""
	I1028 18:31:51.639915   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.639922   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:51.639928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:51.639972   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:51.675728   67149 cri.go:89] found id: ""
	I1028 18:31:51.675755   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.675762   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:51.675768   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:51.675825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:51.710285   67149 cri.go:89] found id: ""
	I1028 18:31:51.710312   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.710320   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:51.710326   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:51.710374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:51.744527   67149 cri.go:89] found id: ""
	I1028 18:31:51.744551   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.744560   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:51.744570   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:51.744584   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.780580   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:51.780614   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:51.832979   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:51.833008   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:51.846389   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:51.846415   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:51.918177   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:51.918196   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:51.918210   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.493806   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:54.506468   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:54.506526   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:54.540500   67149 cri.go:89] found id: ""
	I1028 18:31:54.540527   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.540537   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:54.540544   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:54.540601   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:54.573399   67149 cri.go:89] found id: ""
	I1028 18:31:54.573428   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.573438   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:54.573448   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:54.573509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:54.606227   67149 cri.go:89] found id: ""
	I1028 18:31:54.606262   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.606272   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:54.606278   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:54.606338   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:54.641143   67149 cri.go:89] found id: ""
	I1028 18:31:54.641163   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.641172   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:54.641179   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:54.641238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:54.674269   67149 cri.go:89] found id: ""
	I1028 18:31:54.674292   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.674300   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:54.674306   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:54.674352   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:54.707160   67149 cri.go:89] found id: ""
	I1028 18:31:54.707183   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.707191   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:54.707197   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:54.707242   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:54.746522   67149 cri.go:89] found id: ""
	I1028 18:31:54.746544   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.746552   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:54.746558   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:54.746613   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:54.779315   67149 cri.go:89] found id: ""
	I1028 18:31:54.779341   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.779348   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:54.779356   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:54.779367   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:54.830987   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:54.831017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:54.844846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:54.844871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:54.913540   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:54.913558   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:54.913568   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.994220   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:54.994250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:57.532820   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:57.545394   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:57.545454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:57.582329   67149 cri.go:89] found id: ""
	I1028 18:31:57.582355   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.582365   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:57.582372   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:57.582438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:57.616082   67149 cri.go:89] found id: ""
	I1028 18:31:57.616107   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.616115   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:57.616123   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:57.616167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:57.650118   67149 cri.go:89] found id: ""
	I1028 18:31:57.650144   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.650153   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:57.650162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:57.650215   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:57.684801   67149 cri.go:89] found id: ""
	I1028 18:31:57.684823   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.684831   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:57.684839   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:57.684887   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:57.722396   67149 cri.go:89] found id: ""
	I1028 18:31:57.722423   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.722431   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:57.722437   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:57.722516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:57.759779   67149 cri.go:89] found id: ""
	I1028 18:31:57.759802   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.759809   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:57.759818   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:57.759861   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:57.793977   67149 cri.go:89] found id: ""
	I1028 18:31:57.794034   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.794045   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:57.794053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:57.794117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:57.831104   67149 cri.go:89] found id: ""
	I1028 18:31:57.831130   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.831140   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:57.831151   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:57.831164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:57.920155   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:57.920174   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:57.920184   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:57.999677   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:57.999709   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:58.036647   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:58.036673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:58.088299   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:58.088333   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.601832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:00.615434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:00.615491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:00.653344   67149 cri.go:89] found id: ""
	I1028 18:32:00.653372   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.653383   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:00.653390   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:00.653450   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:00.693086   67149 cri.go:89] found id: ""
	I1028 18:32:00.693111   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.693122   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:00.693130   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:00.693188   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:00.728129   67149 cri.go:89] found id: ""
	I1028 18:32:00.728157   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.728167   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:00.728181   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:00.728243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:00.760540   67149 cri.go:89] found id: ""
	I1028 18:32:00.760568   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.760579   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:00.760586   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:00.760654   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:00.796633   67149 cri.go:89] found id: ""
	I1028 18:32:00.796662   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.796672   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:00.796680   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:00.796740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:00.829924   67149 cri.go:89] found id: ""
	I1028 18:32:00.829954   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.829966   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:00.829974   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:00.830028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:00.861565   67149 cri.go:89] found id: ""
	I1028 18:32:00.861586   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.861593   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:00.861599   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:00.861655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:00.894129   67149 cri.go:89] found id: ""
	I1028 18:32:00.894154   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.894162   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:00.894169   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:00.894180   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.908303   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:00.908331   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:00.974521   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:00.974543   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:00.974557   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:01.048113   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:01.048140   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:01.086657   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:01.086731   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.639781   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:03.652239   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:03.652291   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:03.687098   67149 cri.go:89] found id: ""
	I1028 18:32:03.687120   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.687129   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:03.687135   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:03.687181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:03.722176   67149 cri.go:89] found id: ""
	I1028 18:32:03.722206   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.722217   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:03.722225   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:03.722282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:03.757489   67149 cri.go:89] found id: ""
	I1028 18:32:03.757512   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.757520   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:03.757526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:03.757571   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:03.795359   67149 cri.go:89] found id: ""
	I1028 18:32:03.795400   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.795411   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:03.795429   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:03.795489   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:03.830919   67149 cri.go:89] found id: ""
	I1028 18:32:03.830945   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.830953   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:03.830958   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:03.831008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:03.863396   67149 cri.go:89] found id: ""
	I1028 18:32:03.863425   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.863437   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:03.863445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:03.863516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:03.897085   67149 cri.go:89] found id: ""
	I1028 18:32:03.897112   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.897121   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:03.897128   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:03.897189   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:03.929439   67149 cri.go:89] found id: ""
	I1028 18:32:03.929467   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.929478   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:03.929487   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:03.929503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.982917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:03.982943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:03.996333   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:03.996355   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:04.062786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:04.062813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:04.062827   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:04.143988   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:04.144016   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:06.683977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:06.696605   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:06.696680   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:06.733031   67149 cri.go:89] found id: ""
	I1028 18:32:06.733060   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.733070   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:06.733078   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:06.733138   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:06.769196   67149 cri.go:89] found id: ""
	I1028 18:32:06.769218   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.769225   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:06.769231   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:06.769280   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:06.806938   67149 cri.go:89] found id: ""
	I1028 18:32:06.806959   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.806966   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:06.806972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:06.807017   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:06.839506   67149 cri.go:89] found id: ""
	I1028 18:32:06.839528   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.839537   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:06.839542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:06.839587   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:06.878275   67149 cri.go:89] found id: ""
	I1028 18:32:06.878300   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.878309   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:06.878317   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:06.878382   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:06.916336   67149 cri.go:89] found id: ""
	I1028 18:32:06.916366   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.916374   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:06.916381   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:06.916434   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:06.971413   67149 cri.go:89] found id: ""
	I1028 18:32:06.971435   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.971443   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:06.971449   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:06.971494   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:07.004432   67149 cri.go:89] found id: ""
	I1028 18:32:07.004464   67149 logs.go:282] 0 containers: []
	W1028 18:32:07.004485   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:07.004496   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:07.004509   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:07.081741   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:07.081780   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:07.122022   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:07.122053   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:07.169470   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:07.169496   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:07.183433   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:07.183459   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:07.251765   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:09.752773   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:09.766042   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:09.766119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:09.802881   67149 cri.go:89] found id: ""
	I1028 18:32:09.802911   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.802923   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:09.802930   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:09.802987   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:09.840269   67149 cri.go:89] found id: ""
	I1028 18:32:09.840292   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.840300   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:09.840305   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:09.840370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:09.874654   67149 cri.go:89] found id: ""
	I1028 18:32:09.874679   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.874689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:09.874696   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:09.874752   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:09.910328   67149 cri.go:89] found id: ""
	I1028 18:32:09.910350   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.910358   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:09.910365   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:09.910425   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:09.942717   67149 cri.go:89] found id: ""
	I1028 18:32:09.942744   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.942752   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:09.942757   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:09.942814   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:09.975644   67149 cri.go:89] found id: ""
	I1028 18:32:09.975674   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.975685   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:09.975692   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:09.975750   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:10.008257   67149 cri.go:89] found id: ""
	I1028 18:32:10.008294   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.008305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:10.008313   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:10.008373   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:10.041678   67149 cri.go:89] found id: ""
	I1028 18:32:10.041705   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.041716   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:10.041726   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:10.041739   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:10.090474   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:10.090503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:10.103846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:10.103874   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:10.172819   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:10.172847   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:10.172862   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:10.251927   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:10.251955   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:12.795985   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:12.810859   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:12.810921   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:12.849897   67149 cri.go:89] found id: ""
	I1028 18:32:12.849925   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.849934   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:12.849940   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:12.850003   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:12.883007   67149 cri.go:89] found id: ""
	I1028 18:32:12.883034   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.883045   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:12.883052   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:12.883111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:12.917458   67149 cri.go:89] found id: ""
	I1028 18:32:12.917485   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.917496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:12.917503   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:12.917561   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:12.950531   67149 cri.go:89] found id: ""
	I1028 18:32:12.950558   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.950568   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:12.950576   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:12.950631   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:12.983902   67149 cri.go:89] found id: ""
	I1028 18:32:12.983929   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.983937   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:12.983943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:12.983986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:13.017486   67149 cri.go:89] found id: ""
	I1028 18:32:13.017513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.017521   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:13.017526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:13.017582   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:13.050553   67149 cri.go:89] found id: ""
	I1028 18:32:13.050582   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.050594   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:13.050601   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:13.050658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:13.083489   67149 cri.go:89] found id: ""
	I1028 18:32:13.083513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.083520   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:13.083528   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:13.083537   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:13.137451   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:13.137482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:13.153154   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:13.153179   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:13.221043   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:13.221066   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:13.221080   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:13.299930   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:13.299960   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:15.850484   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:15.862930   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:15.862982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:15.895625   67149 cri.go:89] found id: ""
	I1028 18:32:15.895643   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.895651   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:15.895657   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:15.895701   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:15.928073   67149 cri.go:89] found id: ""
	I1028 18:32:15.928103   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.928113   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:15.928120   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:15.928180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:15.962261   67149 cri.go:89] found id: ""
	I1028 18:32:15.962282   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.962290   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:15.962295   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:15.962342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:15.999177   67149 cri.go:89] found id: ""
	I1028 18:32:15.999206   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.999216   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:15.999224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:15.999282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:16.033098   67149 cri.go:89] found id: ""
	I1028 18:32:16.033126   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.033138   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:16.033145   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:16.033208   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:16.067049   67149 cri.go:89] found id: ""
	I1028 18:32:16.067071   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.067083   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:16.067089   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:16.067145   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:16.106936   67149 cri.go:89] found id: ""
	I1028 18:32:16.106970   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.106981   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:16.106988   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:16.107044   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:16.141702   67149 cri.go:89] found id: ""
	I1028 18:32:16.141729   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.141741   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:16.141751   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:16.141762   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:16.178772   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:16.178803   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:16.230851   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:16.230878   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:16.244489   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:16.244514   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:16.319362   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:16.319389   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:16.319405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:18.899694   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:18.913287   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:18.913358   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:18.954136   67149 cri.go:89] found id: ""
	I1028 18:32:18.954158   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.954165   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:18.954170   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:18.954218   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:18.987427   67149 cri.go:89] found id: ""
	I1028 18:32:18.987449   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.987457   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:18.987462   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:18.987505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:19.022067   67149 cri.go:89] found id: ""
	I1028 18:32:19.022099   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.022110   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:19.022118   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:19.022167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:19.054533   67149 cri.go:89] found id: ""
	I1028 18:32:19.054560   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.054570   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:19.054578   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:19.054644   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:19.099324   67149 cri.go:89] found id: ""
	I1028 18:32:19.099356   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.099367   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:19.099375   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:19.099436   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:19.146437   67149 cri.go:89] found id: ""
	I1028 18:32:19.146463   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.146470   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:19.146478   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:19.146540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:19.192027   67149 cri.go:89] found id: ""
	I1028 18:32:19.192053   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.192070   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:19.192078   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:19.192140   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:19.228411   67149 cri.go:89] found id: ""
	I1028 18:32:19.228437   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.228447   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:19.228457   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:19.228480   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:19.313151   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:19.313183   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:19.352117   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:19.352142   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:19.402772   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:19.402805   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:19.416148   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:19.416167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:19.483098   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:21.983420   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:21.997129   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:21.997180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:22.035600   67149 cri.go:89] found id: ""
	I1028 18:32:22.035622   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.035631   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:22.035637   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:22.035684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:22.073413   67149 cri.go:89] found id: ""
	I1028 18:32:22.073440   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.073450   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:22.073458   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:22.073505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:22.108637   67149 cri.go:89] found id: ""
	I1028 18:32:22.108663   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.108673   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:22.108682   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:22.108740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:22.145837   67149 cri.go:89] found id: ""
	I1028 18:32:22.145860   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.145867   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:22.145873   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:22.145928   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:22.183830   67149 cri.go:89] found id: ""
	I1028 18:32:22.183855   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.183864   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:22.183869   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:22.183917   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:22.221402   67149 cri.go:89] found id: ""
	I1028 18:32:22.221423   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.221430   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:22.221436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:22.221484   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:22.262193   67149 cri.go:89] found id: ""
	I1028 18:32:22.262220   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.262229   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:22.262234   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:22.262297   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:22.298774   67149 cri.go:89] found id: ""
	I1028 18:32:22.298797   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.298808   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:22.298819   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:22.298831   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:22.348677   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:22.348716   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:22.362199   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:22.362220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:22.429304   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:22.429327   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:22.429345   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:22.511591   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:22.511623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.049119   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:25.063910   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:25.063970   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:25.099795   67149 cri.go:89] found id: ""
	I1028 18:32:25.099822   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.099833   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:25.099840   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:25.099898   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:25.137957   67149 cri.go:89] found id: ""
	I1028 18:32:25.137985   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.137995   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:25.138002   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:25.138063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:25.174687   67149 cri.go:89] found id: ""
	I1028 18:32:25.174715   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.174726   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:25.174733   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:25.174795   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:25.207039   67149 cri.go:89] found id: ""
	I1028 18:32:25.207067   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.207077   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:25.207084   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:25.207130   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:25.239961   67149 cri.go:89] found id: ""
	I1028 18:32:25.239990   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.239998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:25.240004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:25.240055   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:25.273823   67149 cri.go:89] found id: ""
	I1028 18:32:25.273848   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.273858   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:25.273865   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:25.273925   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:25.310725   67149 cri.go:89] found id: ""
	I1028 18:32:25.310754   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.310765   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:25.310772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:25.310830   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:25.348724   67149 cri.go:89] found id: ""
	I1028 18:32:25.348749   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.348760   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:25.348770   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:25.348784   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:25.430213   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:25.430243   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.472233   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:25.472263   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:25.525648   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:25.525676   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:25.538697   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:25.538721   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:25.606779   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.107877   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:28.122241   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:28.122296   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:28.157042   67149 cri.go:89] found id: ""
	I1028 18:32:28.157070   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.157082   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:28.157089   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:28.157142   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:28.190625   67149 cri.go:89] found id: ""
	I1028 18:32:28.190648   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.190658   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:28.190666   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:28.190724   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:28.224528   67149 cri.go:89] found id: ""
	I1028 18:32:28.224551   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.224559   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:28.224565   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:28.224609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:28.265073   67149 cri.go:89] found id: ""
	I1028 18:32:28.265100   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.265110   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:28.265116   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:28.265174   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:28.302598   67149 cri.go:89] found id: ""
	I1028 18:32:28.302623   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.302633   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:28.302640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:28.302697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:28.339757   67149 cri.go:89] found id: ""
	I1028 18:32:28.339781   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.339789   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:28.339794   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:28.339846   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:28.375185   67149 cri.go:89] found id: ""
	I1028 18:32:28.375213   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.375224   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:28.375231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:28.375294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:28.413292   67149 cri.go:89] found id: ""
	I1028 18:32:28.413316   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.413334   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:28.413344   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:28.413376   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:28.464069   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:28.464098   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:28.478275   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:28.478299   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:28.546483   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.546504   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:28.546515   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:28.623015   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:28.623041   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.161570   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:31.175056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:31.175119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:31.210163   67149 cri.go:89] found id: ""
	I1028 18:32:31.210187   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.210199   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:31.210207   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:31.210264   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:31.244605   67149 cri.go:89] found id: ""
	I1028 18:32:31.244630   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.244637   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:31.244643   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:31.244688   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:31.280793   67149 cri.go:89] found id: ""
	I1028 18:32:31.280818   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.280827   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:31.280833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:31.280890   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:31.314616   67149 cri.go:89] found id: ""
	I1028 18:32:31.314641   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.314649   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:31.314654   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:31.314709   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:31.349386   67149 cri.go:89] found id: ""
	I1028 18:32:31.349410   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.349417   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:31.349423   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:31.349469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:31.382831   67149 cri.go:89] found id: ""
	I1028 18:32:31.382861   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.382871   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:31.382879   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:31.382924   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:31.417365   67149 cri.go:89] found id: ""
	I1028 18:32:31.417391   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.417400   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:31.417410   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:31.417469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:31.450631   67149 cri.go:89] found id: ""
	I1028 18:32:31.450660   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.450672   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:31.450683   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:31.450697   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.488932   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:31.488959   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:31.539335   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:31.539361   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:31.552304   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:31.552328   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:31.629291   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:31.629308   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:31.629323   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.207517   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:34.221231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:34.221310   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:34.255342   67149 cri.go:89] found id: ""
	I1028 18:32:34.255365   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.255373   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:34.255379   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:34.255438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:34.303802   67149 cri.go:89] found id: ""
	I1028 18:32:34.303827   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.303836   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:34.303843   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:34.303896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:34.339531   67149 cri.go:89] found id: ""
	I1028 18:32:34.339568   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.339579   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:34.339589   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:34.339653   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:34.374063   67149 cri.go:89] found id: ""
	I1028 18:32:34.374084   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.374094   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:34.374102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:34.374155   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:34.410880   67149 cri.go:89] found id: ""
	I1028 18:32:34.410909   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.410918   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:34.410924   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:34.410971   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:34.445372   67149 cri.go:89] found id: ""
	I1028 18:32:34.445397   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.445408   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:34.445416   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:34.445474   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:34.477820   67149 cri.go:89] found id: ""
	I1028 18:32:34.477844   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.477851   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:34.477857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:34.477909   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:34.517581   67149 cri.go:89] found id: ""
	I1028 18:32:34.517602   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.517609   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:34.517618   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:34.517632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:34.530407   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:34.530430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:34.599055   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:34.599083   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:34.599096   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.681579   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:34.681612   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:34.720523   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:34.720550   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.272697   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:37.289091   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:37.289159   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:37.321600   67149 cri.go:89] found id: ""
	I1028 18:32:37.321628   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.321639   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:37.321647   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:37.321704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:37.353296   67149 cri.go:89] found id: ""
	I1028 18:32:37.353324   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.353337   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:37.353343   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:37.353400   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:37.386299   67149 cri.go:89] found id: ""
	I1028 18:32:37.386321   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.386328   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:37.386333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:37.386401   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:37.420992   67149 cri.go:89] found id: ""
	I1028 18:32:37.421026   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.421039   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:37.421047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:37.421117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:37.456174   67149 cri.go:89] found id: ""
	I1028 18:32:37.456206   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.456217   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:37.456224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:37.456284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:37.491796   67149 cri.go:89] found id: ""
	I1028 18:32:37.491819   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.491827   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:37.491833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:37.491878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:37.529002   67149 cri.go:89] found id: ""
	I1028 18:32:37.529028   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.529039   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:37.529047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:37.529111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:37.568967   67149 cri.go:89] found id: ""
	I1028 18:32:37.568993   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.569001   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:37.569010   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:37.569022   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:37.640041   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:37.640065   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:37.640076   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:37.725490   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:37.725524   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:37.771858   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:37.771879   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.821240   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:37.821271   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.334946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:40.349147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:40.349216   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:40.383931   67149 cri.go:89] found id: ""
	I1028 18:32:40.383956   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.383966   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:40.383973   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:40.384028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:40.419877   67149 cri.go:89] found id: ""
	I1028 18:32:40.419905   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.419915   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:40.419922   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:40.419978   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:40.453659   67149 cri.go:89] found id: ""
	I1028 18:32:40.453681   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.453689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:40.453695   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:40.453744   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:40.486299   67149 cri.go:89] found id: ""
	I1028 18:32:40.486326   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.486343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:40.486350   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:40.486407   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:40.518309   67149 cri.go:89] found id: ""
	I1028 18:32:40.518334   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.518344   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:40.518351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:40.518402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:40.549008   67149 cri.go:89] found id: ""
	I1028 18:32:40.549040   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.549049   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:40.549055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:40.549108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:40.586157   67149 cri.go:89] found id: ""
	I1028 18:32:40.586177   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.586184   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:40.586189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:40.586232   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:40.621107   67149 cri.go:89] found id: ""
	I1028 18:32:40.621133   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.621144   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:40.621153   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:40.621164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.633793   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:40.633816   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:40.700370   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:40.700393   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:40.700405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:40.780964   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:40.780993   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:40.819904   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:40.819928   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
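	The probes above all come back empty: for each control-plane component, minikube asks the container runtime for matching containers and finds none. A minimal sketch of the same check done by hand, assuming crictl is installed on the node and CRI-O is the runtime as in this run:
	
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  # an empty result corresponds to a "No container was found matching" warning above
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done
	
	Every probe returning no IDs is consistent with the repeated connection refusals from the apiserver on localhost:8443 elsewhere in this log.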
	I1028 18:32:43.371487   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:43.384387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:43.384445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:43.419889   67149 cri.go:89] found id: ""
	I1028 18:32:43.419922   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.419931   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:43.419937   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:43.419997   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:43.455177   67149 cri.go:89] found id: ""
	I1028 18:32:43.455209   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.455219   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:43.455227   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:43.455295   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:43.493070   67149 cri.go:89] found id: ""
	I1028 18:32:43.493094   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.493104   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:43.493111   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:43.493170   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:43.526164   67149 cri.go:89] found id: ""
	I1028 18:32:43.526191   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.526199   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:43.526205   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:43.526254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:43.559225   67149 cri.go:89] found id: ""
	I1028 18:32:43.559252   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.559263   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:43.559270   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:43.559323   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:43.597178   67149 cri.go:89] found id: ""
	I1028 18:32:43.597198   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.597206   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:43.597212   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:43.597276   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:43.633179   67149 cri.go:89] found id: ""
	I1028 18:32:43.633200   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.633209   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:43.633214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:43.633290   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:43.669567   67149 cri.go:89] found id: ""
	I1028 18:32:43.669596   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.669605   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:43.669615   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:43.669631   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:43.737618   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:43.737638   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:43.737650   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:43.821394   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:43.821425   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:43.859924   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:43.859950   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:43.913539   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:43.913566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:46.429021   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:46.443137   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:46.443197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:46.480363   67149 cri.go:89] found id: ""
	I1028 18:32:46.480385   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.480394   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:46.480400   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:46.480452   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:46.514702   67149 cri.go:89] found id: ""
	I1028 18:32:46.514731   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.514738   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:46.514744   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:46.514796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:46.546829   67149 cri.go:89] found id: ""
	I1028 18:32:46.546857   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.546868   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:46.546874   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:46.546920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:46.580372   67149 cri.go:89] found id: ""
	I1028 18:32:46.580398   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.580407   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:46.580415   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:46.580491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:46.615455   67149 cri.go:89] found id: ""
	I1028 18:32:46.615479   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.615489   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:46.615497   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:46.615556   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:46.649547   67149 cri.go:89] found id: ""
	I1028 18:32:46.649570   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.649577   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:46.649583   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:46.649641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:46.684744   67149 cri.go:89] found id: ""
	I1028 18:32:46.684768   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.684779   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:46.684787   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:46.684852   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:46.725530   67149 cri.go:89] found id: ""
	I1028 18:32:46.725558   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.725569   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:46.725578   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:46.725592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:46.794487   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:46.794506   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:46.794517   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:46.881407   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:46.881438   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:46.921649   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:46.921671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:46.972915   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:46.972947   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.486835   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:49.501445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:49.501509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:49.537356   67149 cri.go:89] found id: ""
	I1028 18:32:49.537377   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.537384   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:49.537389   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:49.537443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:49.568514   67149 cri.go:89] found id: ""
	I1028 18:32:49.568541   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.568549   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:49.568555   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:49.568610   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:49.602300   67149 cri.go:89] found id: ""
	I1028 18:32:49.602324   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.602333   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:49.602342   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:49.602390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:49.640326   67149 cri.go:89] found id: ""
	I1028 18:32:49.640356   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.640366   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:49.640376   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:49.640437   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:49.675145   67149 cri.go:89] found id: ""
	I1028 18:32:49.675175   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.675183   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:49.675189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:49.675235   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:49.711104   67149 cri.go:89] found id: ""
	I1028 18:32:49.711129   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.711139   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:49.711147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:49.711206   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:49.748316   67149 cri.go:89] found id: ""
	I1028 18:32:49.748366   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.748378   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:49.748385   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:49.748441   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:49.781620   67149 cri.go:89] found id: ""
	I1028 18:32:49.781646   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.781656   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:49.781665   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:49.781679   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.795119   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:49.795143   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:49.870438   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:49.870519   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:49.870539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:49.956845   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:49.956875   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:49.993067   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:49.993097   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:52.543260   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:52.556524   67149 kubeadm.go:597] duration metric: took 4m2.404527005s to restartPrimaryControlPlane
	W1028 18:32:52.556602   67149 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:52.556639   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:32:53.011065   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:32:53.026226   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:32:53.035868   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:32:53.045257   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:32:53.045271   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:32:53.045302   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:32:53.054383   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:32:53.054430   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:32:53.063665   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:32:53.073006   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:32:53.073054   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:32:53.083156   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.092700   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:32:53.092742   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.102374   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:32:53.112072   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:32:53.112121   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
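	Before running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply missing). An illustrative shell sketch of that cleanup, using the endpoint and file names shown in the log rather than minikube's actual code path:
	
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # drop the file if it is absent or does not reference the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done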
	I1028 18:32:53.122102   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:32:53.347625   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:49.381931   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:34:49.382111   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:34:49.383570   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:34:49.383633   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:49.383732   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:49.383859   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:49.383975   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:34:49.384073   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:49.385654   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:49.385757   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:49.385847   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:49.385937   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:49.386008   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:49.386118   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:49.386214   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:49.386316   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:49.386391   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:49.386478   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:49.386597   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:49.386643   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:49.386724   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:49.386813   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:49.386891   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:49.386983   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:49.387070   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:49.387209   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:49.387330   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:49.387389   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:49.387474   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:49.389653   67149 out.go:235]   - Booting up control plane ...
	I1028 18:34:49.389760   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:49.389867   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:49.389971   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:49.390088   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:49.390228   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:34:49.390277   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:34:49.390355   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390550   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390645   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390832   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390903   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391069   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391163   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391354   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391452   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391649   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391657   67149 kubeadm.go:310] 
	I1028 18:34:49.391691   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:34:49.391743   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:34:49.391758   67149 kubeadm.go:310] 
	I1028 18:34:49.391789   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:34:49.391822   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:34:49.391908   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:34:49.391914   67149 kubeadm.go:310] 
	I1028 18:34:49.392024   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:34:49.392073   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:34:49.392133   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:34:49.392142   67149 kubeadm.go:310] 
	I1028 18:34:49.392267   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:34:49.392363   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:34:49.392380   67149 kubeadm.go:310] 
	I1028 18:34:49.392525   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:34:49.392629   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:34:49.392737   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:34:49.392830   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:34:49.392879   67149 kubeadm.go:310] 
	W1028 18:34:49.392949   67149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
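	The troubleshooting commands kubeadm suggests can be run directly on the node; a brief sketch using the kubelet health port and CRI-O socket path exactly as they appear in the output above:
	
	# is the kubelet up, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# the health endpoint kubeadm was polling
	curl -sSL http://localhost:10248/healthz
	# list control-plane containers the runtime actually started, then inspect one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	Here CONTAINERID is a placeholder for an ID returned by the ps command, as in kubeadm's own message.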
	
	I1028 18:34:49.392991   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:34:49.869859   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:49.884524   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:49.896293   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:49.896318   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:49.896354   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:49.907312   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:49.907364   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:49.917926   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:49.928001   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:49.928048   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:49.938687   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.949217   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:49.949268   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.959955   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:49.970105   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:49.970156   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:49.980760   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:50.212973   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:36:46.686631   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:36:46.686753   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:36:46.688224   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:36:46.688325   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:36:46.688449   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:36:46.688587   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:36:46.688726   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:36:46.688813   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:36:46.690320   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:36:46.690427   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:36:46.690524   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:36:46.690627   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:36:46.690720   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:36:46.690824   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:36:46.690897   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:36:46.690984   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:36:46.691064   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:36:46.691161   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:36:46.691253   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:36:46.691309   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:36:46.691379   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:36:46.691426   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:36:46.691471   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:36:46.691547   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:36:46.691619   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:36:46.691713   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:36:46.691814   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:36:46.691864   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:36:46.691951   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:36:46.693258   67149 out.go:235]   - Booting up control plane ...
	I1028 18:36:46.693374   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:36:46.693471   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:36:46.693566   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:36:46.693682   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:36:46.693870   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:36:46.693930   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:36:46.694023   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694253   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694343   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694527   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694614   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694798   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694894   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695053   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695119   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695315   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695324   67149 kubeadm.go:310] 
	I1028 18:36:46.695357   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:36:46.695392   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:36:46.695398   67149 kubeadm.go:310] 
	I1028 18:36:46.695427   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:36:46.695456   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:36:46.695542   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:36:46.695549   67149 kubeadm.go:310] 
	I1028 18:36:46.695665   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:36:46.695717   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:36:46.695767   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:36:46.695781   67149 kubeadm.go:310] 
	I1028 18:36:46.695921   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:36:46.696037   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:36:46.696048   67149 kubeadm.go:310] 
	I1028 18:36:46.696177   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:36:46.696285   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:36:46.696390   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:36:46.696512   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:36:46.696560   67149 kubeadm.go:310] 
	I1028 18:36:46.696579   67149 kubeadm.go:394] duration metric: took 7m56.601380499s to StartCluster
	I1028 18:36:46.696618   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:36:46.696670   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:36:46.738714   67149 cri.go:89] found id: ""
	I1028 18:36:46.738741   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.738749   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:36:46.738757   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:36:46.738822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:36:46.772906   67149 cri.go:89] found id: ""
	I1028 18:36:46.772934   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.772944   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:36:46.772951   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:36:46.773028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:36:46.808785   67149 cri.go:89] found id: ""
	I1028 18:36:46.808809   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.808819   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:36:46.808827   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:36:46.808884   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:36:46.842977   67149 cri.go:89] found id: ""
	I1028 18:36:46.843007   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.843016   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:36:46.843022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:36:46.843095   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:36:46.878121   67149 cri.go:89] found id: ""
	I1028 18:36:46.878148   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.878159   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:36:46.878166   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:36:46.878231   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:36:46.911953   67149 cri.go:89] found id: ""
	I1028 18:36:46.911977   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.911984   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:36:46.911990   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:36:46.912054   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:36:46.944291   67149 cri.go:89] found id: ""
	I1028 18:36:46.944317   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.944324   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:36:46.944329   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:36:46.944379   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:36:46.976525   67149 cri.go:89] found id: ""
	I1028 18:36:46.976554   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.976564   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:36:46.976575   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:36:46.976588   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:36:47.026517   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:36:47.026544   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:36:47.041198   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:36:47.041231   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:36:47.115650   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:36:47.115681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:36:47.115695   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:36:47.218059   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:36:47.218093   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 18:36:47.257114   67149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:36:47.257182   67149 out.go:270] * 
	* 
	W1028 18:36:47.257240   67149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.257280   67149 out.go:270] * 
	* 
	W1028 18:36:47.258088   67149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:36:47.261521   67149 out.go:201] 
	W1028 18:36:47.262707   67149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.262742   67149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:36:47.262760   67149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:36:47.264073   67149 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-223868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (229.527519ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-223868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-223868 logs -n 25: (1.497818152s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-703793                              | running-upgrade-703793       | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-021370            | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:25:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:25:35.146308   67489 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:25:35.146467   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146474   67489 out.go:358] Setting ErrFile to fd 2...
	I1028 18:25:35.146480   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146973   67489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:25:35.147825   67489 out.go:352] Setting JSON to false
	I1028 18:25:35.148718   67489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7678,"bootTime":1730132257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:25:35.148810   67489 start.go:139] virtualization: kvm guest
	I1028 18:25:35.150695   67489 out.go:177] * [default-k8s-diff-port-692033] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:25:35.151797   67489 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:25:35.151797   67489 notify.go:220] Checking for updates...
	I1028 18:25:35.154193   67489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:25:35.155491   67489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:25:35.156576   67489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:25:35.157619   67489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:25:35.158702   67489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:25:35.160202   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:25:35.160602   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.160658   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.175095   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I1028 18:25:35.175421   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.175848   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.175863   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.176187   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.176387   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.176667   67489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:25:35.177210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.177325   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.191270   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I1028 18:25:35.191687   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.192092   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.192114   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.192388   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.192551   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.222738   67489 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:25:35.223900   67489 start.go:297] selected driver: kvm2
	I1028 18:25:35.223910   67489 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.224018   67489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:25:35.224696   67489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.224770   67489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:25:35.238839   67489 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:25:35.239228   67489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:25:35.239258   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:25:35.239310   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:25:35.239360   67489 start.go:340] cluster config:
	{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.239480   67489 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.241175   67489 out.go:177] * Starting "default-k8s-diff-port-692033" primary control-plane node in "default-k8s-diff-port-692033" cluster
	I1028 18:25:37.248702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:35.242393   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:25:35.242423   67489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:25:35.242432   67489 cache.go:56] Caching tarball of preloaded images
	I1028 18:25:35.242504   67489 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:25:35.242517   67489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:25:35.242600   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:25:35.242763   67489 start.go:360] acquireMachinesLock for default-k8s-diff-port-692033: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:25:40.320712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:46.400713   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:49.472709   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:55.552712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:58.624703   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:04.704707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:07.776740   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:13.856735   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:16.928744   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:23.008721   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:26.080668   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:32.160706   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:35.232663   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:41.312774   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:44.384739   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:50.464729   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:53.536702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:59.616750   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:02.688719   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:08.768731   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:11.840771   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:17.920756   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:20.992753   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:27.072785   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:30.144726   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:36.224704   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:39.296825   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:45.376692   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:48.448699   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:54.528707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:57.600754   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:28:00.605468   66801 start.go:364] duration metric: took 4m12.368996576s to acquireMachinesLock for "no-preload-051152"
	I1028 18:28:00.605517   66801 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:00.605525   66801 fix.go:54] fixHost starting: 
	I1028 18:28:00.605815   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:00.605850   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:00.621828   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1028 18:28:00.622237   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:00.622654   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:28:00.622674   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:00.622975   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:00.623150   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:00.623272   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:28:00.624880   66801 fix.go:112] recreateIfNeeded on no-preload-051152: state=Stopped err=<nil>
	I1028 18:28:00.624910   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	W1028 18:28:00.625076   66801 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:00.627065   66801 out.go:177] * Restarting existing kvm2 VM for "no-preload-051152" ...
	I1028 18:28:00.603089   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:00.603122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603425   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:28:00.603450   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603663   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:28:00.605343   66600 machine.go:96] duration metric: took 4m37.432159141s to provisionDockerMachine
	I1028 18:28:00.605380   66600 fix.go:56] duration metric: took 4m37.452432846s for fixHost
	I1028 18:28:00.605387   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 4m37.452449736s
	W1028 18:28:00.605419   66600 start.go:714] error starting host: provision: host is not running
	W1028 18:28:00.605517   66600 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 18:28:00.605528   66600 start.go:729] Will try again in 5 seconds ...
	I1028 18:28:00.628172   66801 main.go:141] libmachine: (no-preload-051152) Calling .Start
	I1028 18:28:00.628308   66801 main.go:141] libmachine: (no-preload-051152) Ensuring networks are active...
	I1028 18:28:00.629123   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network default is active
	I1028 18:28:00.629467   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network mk-no-preload-051152 is active
	I1028 18:28:00.629782   66801 main.go:141] libmachine: (no-preload-051152) Getting domain xml...
	I1028 18:28:00.630687   66801 main.go:141] libmachine: (no-preload-051152) Creating domain...
	I1028 18:28:01.819872   66801 main.go:141] libmachine: (no-preload-051152) Waiting to get IP...
	I1028 18:28:01.820792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:01.821214   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:01.821287   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:01.821204   68016 retry.go:31] will retry after 269.081621ms: waiting for machine to come up
	I1028 18:28:02.091799   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.092220   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.092242   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.092175   68016 retry.go:31] will retry after 341.926163ms: waiting for machine to come up
	I1028 18:28:02.435679   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.436035   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.436067   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.435982   68016 retry.go:31] will retry after 355.739166ms: waiting for machine to come up
	I1028 18:28:02.793549   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.793928   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.793953   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.793881   68016 retry.go:31] will retry after 496.396184ms: waiting for machine to come up
	I1028 18:28:05.607678   66600 start.go:360] acquireMachinesLock for embed-certs-021370: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:28:03.291568   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.292038   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.292068   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.291978   68016 retry.go:31] will retry after 561.311245ms: waiting for machine to come up
	I1028 18:28:03.854782   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.855137   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.855166   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.855088   68016 retry.go:31] will retry after 574.675969ms: waiting for machine to come up
	I1028 18:28:04.431784   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:04.432226   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:04.432250   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:04.432177   68016 retry.go:31] will retry after 1.028136295s: waiting for machine to come up
	I1028 18:28:05.461477   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:05.461839   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:05.461869   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:05.461795   68016 retry.go:31] will retry after 955.343831ms: waiting for machine to come up
	I1028 18:28:06.418161   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:06.418629   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:06.418659   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:06.418576   68016 retry.go:31] will retry after 1.615930502s: waiting for machine to come up
	I1028 18:28:08.036275   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:08.036641   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:08.036662   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:08.036615   68016 retry.go:31] will retry after 2.111463198s: waiting for machine to come up
	I1028 18:28:10.150891   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:10.151403   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:10.151429   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:10.151351   68016 retry.go:31] will retry after 2.35232289s: waiting for machine to come up
	I1028 18:28:12.506070   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:12.506471   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:12.506494   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:12.506447   68016 retry.go:31] will retry after 2.874687772s: waiting for machine to come up
	I1028 18:28:15.384360   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:15.384680   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:15.384712   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:15.384636   68016 retry.go:31] will retry after 3.299950406s: waiting for machine to come up
	I1028 18:28:19.893083   67149 start.go:364] duration metric: took 3m43.747535803s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:28:19.893161   67149 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:19.893170   67149 fix.go:54] fixHost starting: 
	I1028 18:28:19.893556   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:19.893608   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:19.909857   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I1028 18:28:19.910215   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:19.910669   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:28:19.910690   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:19.911049   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:19.911241   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:19.911395   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:28:19.912825   67149 fix.go:112] recreateIfNeeded on old-k8s-version-223868: state=Stopped err=<nil>
	I1028 18:28:19.912856   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	W1028 18:28:19.912996   67149 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:19.915041   67149 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	I1028 18:28:19.916422   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .Start
	I1028 18:28:19.916611   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:28:19.917295   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:28:19.917560   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:28:19.917951   67149 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:28:19.918628   67149 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:28:18.688243   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.688710   66801 main.go:141] libmachine: (no-preload-051152) Found IP for machine: 192.168.61.78
	I1028 18:28:18.688738   66801 main.go:141] libmachine: (no-preload-051152) Reserving static IP address...
	I1028 18:28:18.688754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has current primary IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.689151   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.689174   66801 main.go:141] libmachine: (no-preload-051152) Reserved static IP address: 192.168.61.78
	I1028 18:28:18.689188   66801 main.go:141] libmachine: (no-preload-051152) DBG | skip adding static IP to network mk-no-preload-051152 - found existing host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"}
	I1028 18:28:18.689198   66801 main.go:141] libmachine: (no-preload-051152) Waiting for SSH to be available...
	I1028 18:28:18.689217   66801 main.go:141] libmachine: (no-preload-051152) DBG | Getting to WaitForSSH function...
	I1028 18:28:18.691372   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691721   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.691754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691861   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH client type: external
	I1028 18:28:18.691890   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa (-rw-------)
	I1028 18:28:18.691950   66801 main.go:141] libmachine: (no-preload-051152) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:18.691967   66801 main.go:141] libmachine: (no-preload-051152) DBG | About to run SSH command:
	I1028 18:28:18.691979   66801 main.go:141] libmachine: (no-preload-051152) DBG | exit 0
	I1028 18:28:18.816169   66801 main.go:141] libmachine: (no-preload-051152) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:18.816571   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetConfigRaw
	I1028 18:28:18.817209   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:18.819569   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.819891   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.819913   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.820164   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:28:18.820375   66801 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:18.820392   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:18.820618   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.822580   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.822953   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.822983   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.823096   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.823250   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823390   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823537   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.823687   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.823878   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.823890   66801 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:18.932489   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:18.932516   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.932769   66801 buildroot.go:166] provisioning hostname "no-preload-051152"
	I1028 18:28:18.932798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.933003   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.935565   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.935938   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.935965   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.936147   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.936346   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936513   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936674   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.936838   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.936994   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.937006   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-051152 && echo "no-preload-051152" | sudo tee /etc/hostname
	I1028 18:28:19.057840   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-051152
	
	I1028 18:28:19.057872   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.060536   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.060917   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.060946   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.061068   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.061237   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061405   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061544   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.061700   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.061848   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.061863   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-051152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-051152/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-051152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:19.180890   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:19.180920   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:19.180957   66801 buildroot.go:174] setting up certificates
	I1028 18:28:19.180971   66801 provision.go:84] configureAuth start
	I1028 18:28:19.180985   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:19.181299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.183792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184144   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.184172   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184309   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.186298   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186588   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.186616   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186722   66801 provision.go:143] copyHostCerts
	I1028 18:28:19.186790   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:19.186804   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:19.186868   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:19.186974   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:19.186986   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:19.187023   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:19.187107   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:19.187115   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:19.187146   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:19.187197   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.no-preload-051152 san=[127.0.0.1 192.168.61.78 localhost minikube no-preload-051152]
	I1028 18:28:19.275109   66801 provision.go:177] copyRemoteCerts
	I1028 18:28:19.275175   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:19.275200   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.278392   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.278946   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.278978   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.279183   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.279454   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.279651   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.279789   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.362094   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:19.384635   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:28:19.406649   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:19.428807   66801 provision.go:87] duration metric: took 247.825267ms to configureAuth
	I1028 18:28:19.428830   66801 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:19.429026   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:28:19.429090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.431615   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.431928   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.431954   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.432090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.432278   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432434   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432602   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.432786   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.432932   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.432946   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:19.655137   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:19.655163   66801 machine.go:96] duration metric: took 834.775161ms to provisionDockerMachine
	I1028 18:28:19.655175   66801 start.go:293] postStartSetup for "no-preload-051152" (driver="kvm2")
	I1028 18:28:19.655185   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:19.655199   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.655509   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:19.655532   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.658099   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658411   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.658442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658566   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.658744   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.658884   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.659013   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.743030   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:19.746986   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:19.747007   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:19.747081   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:19.747177   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:19.747290   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:19.756378   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:19.779243   66801 start.go:296] duration metric: took 124.056855ms for postStartSetup
	I1028 18:28:19.779283   66801 fix.go:56] duration metric: took 19.173756385s for fixHost
	I1028 18:28:19.779305   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.781887   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782205   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.782226   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782367   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.782557   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782709   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782836   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.782999   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.783180   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.783191   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:19.892920   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140099.866892804
	
	I1028 18:28:19.892944   66801 fix.go:216] guest clock: 1730140099.866892804
	I1028 18:28:19.892954   66801 fix.go:229] Guest: 2024-10-28 18:28:19.866892804 +0000 UTC Remote: 2024-10-28 18:28:19.779287594 +0000 UTC m=+271.674302547 (delta=87.60521ms)
	I1028 18:28:19.892997   66801 fix.go:200] guest clock delta is within tolerance: 87.60521ms
	I1028 18:28:19.893008   66801 start.go:83] releasing machines lock for "no-preload-051152", held for 19.287505767s
	I1028 18:28:19.893034   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.893299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.895775   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896177   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.896204   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896362   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.896826   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897023   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897133   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:19.897171   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.897267   66801 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:19.897291   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.899703   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.899995   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900031   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900054   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900208   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900374   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900416   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900550   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.900626   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900707   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.900818   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900944   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.901098   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.982201   66801 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:20.008913   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:20.157816   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:20.165773   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:20.165837   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:20.187342   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:20.187359   66801 start.go:495] detecting cgroup driver to use...
	I1028 18:28:20.187423   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:20.204825   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:20.220702   66801 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:20.220776   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:20.238812   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:20.253664   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:20.363567   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:20.534475   66801 docker.go:233] disabling docker service ...
	I1028 18:28:20.534564   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:20.548424   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:20.564292   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:20.687135   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:20.796225   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:20.810327   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:20.828804   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:28:20.828866   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.838719   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:20.838768   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.849166   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.862811   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.875223   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:20.885402   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.895602   66801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.914163   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.924194   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:20.934907   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:20.934958   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:20.948898   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:20.958955   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:21.069438   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:21.175294   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:21.175379   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:21.179886   66801 start.go:563] Will wait 60s for crictl version
	I1028 18:28:21.179942   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.184195   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:21.226939   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:21.227043   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.254702   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.284607   66801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:28:21.285906   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:21.288560   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.288918   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:21.288945   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.289132   66801 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:21.293108   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:21.307303   66801 kubeadm.go:883] updating cluster {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:21.307447   66801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:28:21.307495   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:21.347493   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:28:21.347520   66801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:21.347595   66801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.347609   66801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.347621   66801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.347656   66801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.347690   66801 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 18:28:21.347691   66801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.347758   66801 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.347695   66801 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349312   66801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.349387   66801 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 18:28:21.349402   66801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.349526   66801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.349574   66801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.349582   66801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.349632   66801 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349311   66801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.515246   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.515760   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.543817   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 18:28:21.551755   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.562433   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.594208   66801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 18:28:21.594257   66801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.594291   66801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 18:28:21.594317   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.594323   66801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.594364   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.666046   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.666654   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.757831   66801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 18:28:21.757867   66801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.757867   66801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 18:28:21.757894   66801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.757914   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757926   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.757937   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757982   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.758142   66801 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 18:28:21.758161   66801 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 18:28:21.758197   66801 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.758169   66801 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.758234   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.758270   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.813746   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.813792   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.813836   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.813837   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.813840   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.813890   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.934434   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.958229   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.958287   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.958377   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.958381   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.958467   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.053179   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 18:28:22.053304   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.053351   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 18:28:22.053447   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:22.087756   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:22.087762   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:22.087826   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:22.087867   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.087897   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 18:28:22.087907   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087938   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087942   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 18:28:22.161136   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 18:28:22.161259   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:22.201924   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 18:28:22.201967   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 18:28:22.202032   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:22.202068   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:21.207941   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:28:21.209066   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.209518   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.209604   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.209495   68155 retry.go:31] will retry after 258.02952ms: waiting for machine to come up
	I1028 18:28:21.468599   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.469034   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.469052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.468996   68155 retry.go:31] will retry after 389.053264ms: waiting for machine to come up
	I1028 18:28:21.859493   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.859987   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.860017   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.859929   68155 retry.go:31] will retry after 454.438888ms: waiting for machine to come up
	I1028 18:28:22.315484   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.315961   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.315988   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.315904   68155 retry.go:31] will retry after 531.549561ms: waiting for machine to come up
	I1028 18:28:22.849247   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.849736   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.849791   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.849693   68155 retry.go:31] will retry after 602.202649ms: waiting for machine to come up
	I1028 18:28:23.453311   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:23.453859   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:23.453887   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:23.453796   68155 retry.go:31] will retry after 836.622626ms: waiting for machine to come up
	I1028 18:28:24.291959   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:24.292286   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:24.292315   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:24.292252   68155 retry.go:31] will retry after 1.187276744s: waiting for machine to come up
	I1028 18:28:25.480962   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:25.481398   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:25.481417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:25.481350   68155 retry.go:31] will retry after 1.417127806s: waiting for machine to come up
	I1028 18:28:23.586400   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.127903   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3: (2.040063682s)
	I1028 18:28:24.127962   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 18:28:24.127967   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (1.966690859s)
	I1028 18:28:24.127991   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 18:28:24.128010   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.925953727s)
	I1028 18:28:24.128034   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.925947261s)
	I1028 18:28:24.128041   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 18:28:24.128048   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 18:28:24.127904   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.03994028s)
	I1028 18:28:24.128069   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:24.128085   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 18:28:24.128109   66801 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 18:28:24.128123   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.128138   66801 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.128166   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:24.128180   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.132734   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 18:28:26.097200   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.9689964s)
	I1028 18:28:26.097240   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 18:28:26.097241   66801 ssh_runner.go:235] Completed: which crictl: (1.969052863s)
	I1028 18:28:26.097264   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.097308   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:26.097309   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.900944   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:26.901481   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:26.901511   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:26.901426   68155 retry.go:31] will retry after 1.766762252s: waiting for machine to come up
	I1028 18:28:28.670334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:28.670798   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:28.670827   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:28.670742   68155 retry.go:31] will retry after 2.287152926s: waiting for machine to come up
	I1028 18:28:30.959639   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:30.959947   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:30.959963   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:30.959917   68155 retry.go:31] will retry after 1.799223833s: waiting for machine to come up
	I1028 18:28:28.165293   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.067952153s)
	I1028 18:28:28.165410   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:28.165497   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.068111312s)
	I1028 18:28:28.165523   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 18:28:28.165548   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.165591   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.208189   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:30.152411   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.986796263s)
	I1028 18:28:30.152458   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 18:28:30.152496   66801 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152504   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.944281988s)
	I1028 18:28:30.152550   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152556   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 18:28:30.152652   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:32.761498   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:32.761941   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:32.761968   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:32.761894   68155 retry.go:31] will retry after 2.231065891s: waiting for machine to come up
	I1028 18:28:34.994438   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:34.994902   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:34.994936   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:34.994847   68155 retry.go:31] will retry after 4.079794439s: waiting for machine to come up
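The repeated "unable to find current IP address" lines above are the KVM driver polling libvirt for a DHCP lease on the machine's MAC address, with growing backoff between attempts. A hedged way to inspect the same state by hand, assuming virsh is available on the Jenkins host (network name and MAC taken from the log):

    # Show DHCP leases on the minikube-created libvirt network for this VM's MAC
    virsh net-dhcp-leases mk-old-k8s-version-223868 | grep -i '52:54:00:9d:b8:c9'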
	I1028 18:28:33.842059   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.689484833s)
	I1028 18:28:33.842109   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 18:28:33.842138   66801 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:33.842155   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.68947822s)
	I1028 18:28:33.842184   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 18:28:33.842206   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:35.714458   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.872222439s)
	I1028 18:28:35.714493   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 18:28:35.714521   66801 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:35.714567   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:36.568124   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 18:28:36.568177   66801 cache_images.go:123] Successfully loaded all cached images
	I1028 18:28:36.568185   66801 cache_images.go:92] duration metric: took 15.220649269s to LoadCachedImages
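Each image above follows the same path: the cached tarball is copied to /var/lib/minikube/images/ (or skipped if it already exists), loaded into CRI-O's storage with podman, and the stale tag removed with crictl. A condensed manual equivalent for a single image, assuming the tarball is already on the node (file name taken from the log):

    # Load one cached image tarball into CRI-O's storage and verify it is visible
    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
    sudo crictl images | grep kube-scheduler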
	I1028 18:28:36.568199   66801 kubeadm.go:934] updating node { 192.168.61.78 8443 v1.31.2 crio true true} ...
	I1028 18:28:36.568310   66801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-051152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
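The kubelet flags rendered above are not written into the main unit file; further down the log they are copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (with the base unit at /lib/systemd/system/kubelet.service). One way to see the merged result on the node, as an illustration rather than part of this run:

    # Show the effective kubelet unit, including the minikube-written drop-in
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf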
	I1028 18:28:36.568383   66801 ssh_runner.go:195] Run: crio config
	I1028 18:28:36.613400   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:36.613425   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:36.613435   66801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:36.613454   66801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.78 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-051152 NodeName:no-preload-051152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:28:36.613596   66801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-051152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:36.613669   66801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:28:36.624493   66801 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:36.624553   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:36.633828   66801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 18:28:36.649661   66801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:36.665454   66801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
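The kubeadm config rendered earlier is written to /var/tmp/minikube/kubeadm.yaml.new by the scp line just above; later in this log it is diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether reconfiguration is needed, and the certs/kubeconfig phases are re-run from it. The equivalent commands on the node, taken from this log and shown only as an illustration:

    # Compare the freshly rendered config with the one already on disk
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # Regenerate certificates from the same config (as the restart path does later)
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml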
	I1028 18:28:36.681280   66801 ssh_runner.go:195] Run: grep 192.168.61.78	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:36.685010   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:36.697177   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:36.823266   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:36.840346   66801 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152 for IP: 192.168.61.78
	I1028 18:28:36.840366   66801 certs.go:194] generating shared ca certs ...
	I1028 18:28:36.840380   66801 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:36.840538   66801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:36.840578   66801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:36.840586   66801 certs.go:256] generating profile certs ...
	I1028 18:28:36.840661   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.key
	I1028 18:28:36.840722   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key.262d982c
	I1028 18:28:36.840758   66801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key
	I1028 18:28:36.840859   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:36.840892   66801 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:36.840902   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:36.840922   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:36.840943   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:36.840971   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:36.841025   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:36.841818   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:36.881548   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:36.907084   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:36.947810   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:36.976268   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 18:28:37.003795   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:28:37.036252   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:37.059731   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:28:37.083467   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:37.106397   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:37.128719   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:37.151133   66801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:37.166917   66801 ssh_runner.go:195] Run: openssl version
	I1028 18:28:37.172387   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:37.182117   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186329   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186389   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.191925   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:37.201799   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:37.211620   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215889   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215923   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.221588   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:37.231983   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:37.242291   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246869   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246904   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.252408   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:37.262946   66801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:37.267334   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:37.273164   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:37.278831   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:37.284778   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:37.290547   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:37.296195   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
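The openssl runs above verify two things: each control-plane certificate is still valid for at least another day (-checkend takes seconds, so 86400 = 24h), and each CA bundle is linked into /etc/ssl/certs under its subject-hash name. A minimal sketch of the same checks, using paths from the log:

    # Exit status 0 means the cert will still be valid in 24 hours
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "ok for another day"
    # The /etc/ssl/certs symlink name is the subject hash of the CA cert (e.g. b5213941 above)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem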
	I1028 18:28:37.301915   66801 kubeadm.go:392] StartCluster: {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:37.301986   66801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:37.302037   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.345115   66801 cri.go:89] found id: ""
	I1028 18:28:37.345185   66801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:37.355312   66801 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:37.355328   66801 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:37.355370   66801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:37.364777   66801 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:37.366056   66801 kubeconfig.go:125] found "no-preload-051152" server: "https://192.168.61.78:8443"
	I1028 18:28:37.368829   66801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:37.378010   66801 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.78
	I1028 18:28:37.378039   66801 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:37.378047   66801 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:37.378083   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.413442   66801 cri.go:89] found id: ""
	I1028 18:28:37.413522   66801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:37.428998   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:37.438365   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:37.438391   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:37.438442   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:37.447260   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:37.447310   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:37.456615   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:37.465292   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:37.465351   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:37.474382   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.482957   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:37.483012   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.491991   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:37.500635   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:37.500709   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
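The four grep/rm pairs above implement the stale-config check: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 (or is missing, as here) is removed so kubeadm can regenerate it. A compact sketch of that loop, with the file list and endpoint taken from the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done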
	I1028 18:28:37.509632   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:37.518808   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:37.642796   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:40.421350   67489 start.go:364] duration metric: took 3m5.178550845s to acquireMachinesLock for "default-k8s-diff-port-692033"
	I1028 18:28:40.421416   67489 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:40.421430   67489 fix.go:54] fixHost starting: 
	I1028 18:28:40.421843   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:40.421894   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:40.439583   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I1028 18:28:40.440133   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:40.440679   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:28:40.440701   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:40.441025   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:40.441198   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:40.441359   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:28:40.443029   67489 fix.go:112] recreateIfNeeded on default-k8s-diff-port-692033: state=Stopped err=<nil>
	I1028 18:28:40.443055   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	W1028 18:28:40.443202   67489 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:40.445489   67489 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-692033" ...
	I1028 18:28:39.079052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079556   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079584   67149 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:28:39.079593   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:28:39.079888   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.079919   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | skip adding static IP to network mk-old-k8s-version-223868 - found existing host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"}
	I1028 18:28:39.079935   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:28:39.079955   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:28:39.079971   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:28:39.082041   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.082354   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082480   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:28:39.082500   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:28:39.082528   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:39.082555   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:28:39.082567   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:28:39.204523   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:39.204883   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:28:39.205526   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.208073   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208434   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.208478   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208709   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:28:39.208907   67149 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:39.208926   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:39.209144   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.211109   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211407   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.211437   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.211739   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.211888   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.212033   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.212218   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.212388   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.212398   67149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:39.316528   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:39.316566   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.316813   67149 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:28:39.316841   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.317028   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.319389   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319687   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.319713   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319836   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.320017   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320167   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320310   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.320458   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.320642   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.320656   67149 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:28:39.439149   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:28:39.439179   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.441957   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442268   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.442300   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442528   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.442736   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.442940   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.443122   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.443304   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.443525   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.443550   67149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:39.561619   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:39.561651   67149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:39.561702   67149 buildroot.go:174] setting up certificates
	I1028 18:28:39.561716   67149 provision.go:84] configureAuth start
	I1028 18:28:39.561731   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.562015   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.564838   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565195   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.565229   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565373   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.567875   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568262   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.568287   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568452   67149 provision.go:143] copyHostCerts
	I1028 18:28:39.568534   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:39.568553   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:39.568621   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:39.568745   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:39.568768   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:39.568810   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:39.568899   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:39.568911   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:39.568937   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:39.569006   67149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
	I1028 18:28:39.786398   67149 provision.go:177] copyRemoteCerts
	I1028 18:28:39.786449   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:39.786482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.789048   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789373   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.789417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789535   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.789733   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.789884   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.790013   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:39.871816   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:39.902889   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:28:39.932633   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:39.958581   67149 provision.go:87] duration metric: took 396.851161ms to configureAuth
	I1028 18:28:39.958609   67149 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:39.958796   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:28:39.958881   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.961667   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962019   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.962044   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962240   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.962468   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962671   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962850   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.963037   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.963220   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.963239   67149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:40.179808   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:40.179843   67149 machine.go:96] duration metric: took 970.91659ms to provisionDockerMachine
	I1028 18:28:40.179857   67149 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:28:40.179869   67149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:40.179917   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.180287   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:40.180319   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.183011   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183383   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.183411   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183578   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.183770   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.183964   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.184114   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.270445   67149 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:40.275798   67149 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:40.275825   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:40.275898   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:40.275995   67149 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:40.276108   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:40.287529   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:40.310860   67149 start.go:296] duration metric: took 130.989944ms for postStartSetup
	I1028 18:28:40.310899   67149 fix.go:56] duration metric: took 20.417730265s for fixHost
	I1028 18:28:40.310925   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.313613   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.313967   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.314000   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.314175   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.314354   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314518   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314692   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.314862   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:40.315021   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:40.315032   67149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:40.421204   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140120.384024791
	
	I1028 18:28:40.421225   67149 fix.go:216] guest clock: 1730140120.384024791
	I1028 18:28:40.421235   67149 fix.go:229] Guest: 2024-10-28 18:28:40.384024791 +0000 UTC Remote: 2024-10-28 18:28:40.310903937 +0000 UTC m=+244.300202669 (delta=73.120854ms)
	I1028 18:28:40.421259   67149 fix.go:200] guest clock delta is within tolerance: 73.120854ms
	I1028 18:28:40.421265   67149 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 20.528130845s
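The fix.go lines just above compare the guest's `date +%s.%N` reading with the host-side timestamp and accept the skew because the delta (~73ms) is within tolerance. A minimal sketch of that comparison, in Go, using the values copied from the log (the actual tolerance value is not shown in the log, so none is asserted here):

package main

import (
	"fmt"
	"time"
)

// clockDelta converts a `date +%s.%N` reading into a time.Time and returns how
// far the guest clock is ahead of (or behind) the host-side reference time.
// float64 cannot carry full nanosecond precision for an epoch this large, so
// the result is only accurate to roughly a microsecond.
func clockDelta(guestEpoch float64, remote time.Time) time.Duration {
	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
	return guest.Sub(remote)
}

func main() {
	// Guest and remote timestamps copied from the log lines above.
	remote := time.Date(2024, 10, 28, 18, 28, 40, 310903937, time.UTC)
	fmt.Println("guest clock delta:", clockDelta(1730140120.384024791, remote)) // ~73ms
}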
	I1028 18:28:40.421297   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.421574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:40.424700   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425088   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.425116   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425286   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.425971   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426188   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426266   67149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:40.426340   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.426604   67149 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:40.426632   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.429408   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429569   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429807   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.429841   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429950   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430059   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.430092   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.430123   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430236   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430383   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430459   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430616   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.430614   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.509203   67149 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:40.540019   67149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:40.701732   67149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:40.710264   67149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:40.710354   67149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:40.731373   67149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:40.731398   67149 start.go:495] detecting cgroup driver to use...
	I1028 18:28:40.731465   67149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:40.751312   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:40.766288   67149 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:40.766399   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:40.783995   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:40.800295   67149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:40.940688   67149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:41.101493   67149 docker.go:233] disabling docker service ...
	I1028 18:28:41.101562   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:41.123350   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:41.141744   67149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:41.279020   67149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:41.414748   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:41.429469   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:41.448611   67149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:28:41.448669   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.460766   67149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:41.460842   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.473021   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.485888   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.497498   67149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:41.509250   67149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:41.519701   67149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:41.519754   67149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:41.534596   67149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:41.544814   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:41.681203   67149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:41.786879   67149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:41.786957   67149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:41.791981   67149 start.go:563] Will wait 60s for crictl version
	I1028 18:28:41.792041   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:41.796034   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:41.839867   67149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:41.839958   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.873029   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.904534   67149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
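The 67149 lines above reconfigure CRI-O over SSH (pause image, cgroupfs cgroup manager), restart it, and then wait up to 60s for /var/run/crio/crio.sock before probing crictl version. A minimal sketch of that socket wait; the real check runs stat over SSH via ssh_runner, while this simplification polls the path locally:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket path exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket present; crictl version can be probed next
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}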
	I1028 18:28:38.508232   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.720400   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.784720   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.892007   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:38.892083   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.392953   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.892228   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.912702   66801 api_server.go:72] duration metric: took 1.020696043s to wait for apiserver process to appear ...
	I1028 18:28:39.912728   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:28:39.912749   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:39.913221   66801 api_server.go:269] stopped: https://192.168.61.78:8443/healthz: Get "https://192.168.61.78:8443/healthz": dial tcp 192.168.61.78:8443: connect: connection refused
	I1028 18:28:40.413025   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:40.446984   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Start
	I1028 18:28:40.447191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring networks are active...
	I1028 18:28:40.447998   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network default is active
	I1028 18:28:40.448350   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network mk-default-k8s-diff-port-692033 is active
	I1028 18:28:40.448884   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Getting domain xml...
	I1028 18:28:40.449664   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Creating domain...
	I1028 18:28:41.740010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting to get IP...
	I1028 18:28:41.740827   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741273   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:41.741192   68341 retry.go:31] will retry after 276.06097ms: waiting for machine to come up
	I1028 18:28:42.018700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019135   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019159   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.019089   68341 retry.go:31] will retry after 318.252876ms: waiting for machine to come up
	I1028 18:28:42.338630   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339287   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339312   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.339205   68341 retry.go:31] will retry after 428.196122ms: waiting for machine to come up
	I1028 18:28:42.768656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769225   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769248   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.769134   68341 retry.go:31] will retry after 483.256928ms: waiting for machine to come up
	I1028 18:28:43.253739   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254304   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254353   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.254220   68341 retry.go:31] will retry after 577.932805ms: waiting for machine to come up
	I1028 18:28:43.834355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.834976   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.835021   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.834945   68341 retry.go:31] will retry after 639.531065ms: waiting for machine to come up
	I1028 18:28:44.475727   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476299   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476331   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:44.476248   68341 retry.go:31] will retry after 1.171398436s: waiting for machine to come up
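The 67489 (default-k8s-diff-port-692033) lines above poll the libvirt network's DHCP leases for the domain's MAC address, retrying with a growing, jittered delay until an IP appears. A rough sketch of that wait loop, assuming a hypothetical lookupLeaseIP helper in place of the real libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical placeholder for querying the network's DHCP
// leases for a MAC address; in this stub no lease ever appears.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries the lease lookup with a jittered, roughly growing delay,
// similar in spirit to the "will retry after ..." lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		delay := time.Duration(200+rand.Intn(200*attempt)) * time.Millisecond
		fmt.Printf("retry %d: waiting %v for machine to come up\n", attempt, delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:89:53:89", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	}
}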
	I1028 18:28:43.473059   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.473096   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.473113   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.588338   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.588371   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.913612   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.918557   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:43.918598   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.412902   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.425930   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.425971   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.913482   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.926092   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.926126   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:45.413673   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:45.419384   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:28:45.430384   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:28:45.430431   66801 api_server.go:131] duration metric: took 5.517694037s to wait for apiserver health ...
	I1028 18:28:45.430442   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:45.430450   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:45.432587   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
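The 66801 lines above show the apiserver readiness wait: GET https://192.168.61.78:8443/healthz roughly every 500ms, treating connection refused, 403 (before RBAC bootstrap finishes) and 500 (while poststart hooks complete) as "not ready", and stopping once the endpoint returns 200 "ok". A self-contained sketch of that loop; the real client authenticates with the cluster CA and client certificates, so the InsecureSkipVerify here is only a simplification to keep the example standalone:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls a kube-apiserver /healthz URL until it answers 200 OK
// or the timeout expires; non-200 responses and transport errors are retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.78:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}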
	I1028 18:28:41.906005   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:41.909278   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909683   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:41.909741   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909996   67149 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:41.915405   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:41.931747   67149 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:41.931886   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:28:41.931944   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:41.987909   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:41.987966   67149 ssh_runner.go:195] Run: which lz4
	I1028 18:28:41.993527   67149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:28:41.998982   67149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:28:41.999014   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:28:43.643480   67149 crio.go:462] duration metric: took 1.649982959s to copy over tarball
	I1028 18:28:43.643559   67149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:28:45.433946   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:28:45.453114   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:28:45.479255   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:28:45.497020   66801 system_pods.go:59] 8 kube-system pods found
	I1028 18:28:45.497072   66801 system_pods.go:61] "coredns-7c65d6cfc9-74b6t" [b6a550da-7c40-4283-b49e-1ab29e652037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:28:45.497084   66801 system_pods.go:61] "etcd-no-preload-051152" [d5b31ded-95ce-4dde-ba88-e653dfdb8d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:28:45.497097   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [95d0acb0-4d58-4307-9f4f-10f920ff4745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:28:45.497105   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [722530e1-1d76-40dc-8a24-fe79d0167835] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:28:45.497112   66801 system_pods.go:61] "kube-proxy-kg42f" [7891354b-a501-45c4-b15c-cf6d29e3721f] Running
	I1028 18:28:45.497121   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [c658808c-79c2-4b8e-b72c-0b2d8e058ab4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:28:45.497130   66801 system_pods.go:61] "metrics-server-6867b74b74-vgd8k" [626b71a2-6904-409f-9274-6963a94e6ac2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:28:45.497137   66801 system_pods.go:61] "storage-provisioner" [39bf84c9-9c6f-4048-8a11-460fb12f622b] Running
	I1028 18:28:45.497146   66801 system_pods.go:74] duration metric: took 17.863894ms to wait for pod list to return data ...
	I1028 18:28:45.497160   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:28:45.501945   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:28:45.501977   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:28:45.501993   66801 node_conditions.go:105] duration metric: took 4.827279ms to run NodePressure ...
	I1028 18:28:45.502014   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:45.835429   66801 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840823   66801 kubeadm.go:739] kubelet initialised
	I1028 18:28:45.840852   66801 kubeadm.go:740] duration metric: took 5.391212ms waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840862   66801 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:28:45.846565   66801 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:45.648994   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649559   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649587   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:45.649512   68341 retry.go:31] will retry after 1.258585317s: waiting for machine to come up
	I1028 18:28:46.909541   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909955   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:46.909911   68341 retry.go:31] will retry after 1.827150306s: waiting for machine to come up
	I1028 18:28:48.738193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738696   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738725   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:48.738653   68341 retry.go:31] will retry after 1.738249889s: waiting for machine to come up
	I1028 18:28:46.758767   67149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115173801s)
	I1028 18:28:46.758810   67149 crio.go:469] duration metric: took 3.115300284s to extract the tarball
	I1028 18:28:46.758821   67149 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:28:46.816906   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:46.864347   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:46.864376   67149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:46.864499   67149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.864564   67149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.864623   67149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.864639   67149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.864674   67149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.864686   67149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:28:46.864710   67149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.864529   67149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:46.866383   67149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.866445   67149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.866493   67149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.866579   67149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.866795   67149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.867073   67149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.867095   67149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:28:46.867488   67149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.043358   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.053844   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.055684   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.056812   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.066211   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.090931   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.104900   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:28:47.141214   67149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:28:47.141260   67149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.141307   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202804   67149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:28:47.202863   67149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.202873   67149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:28:47.202903   67149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.202915   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202944   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.234811   67149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:28:47.234853   67149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.234900   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.236717   67149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:28:47.236751   67149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.236798   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.243872   67149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:28:47.243918   67149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.243971   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260210   67149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:28:47.260253   67149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:28:47.260256   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.260293   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260398   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.260438   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.260456   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.260517   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.260559   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413617   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.413776   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.413804   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413825   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.414063   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.414103   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.414150   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.544933   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.581577   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.582079   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.582161   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.582206   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.582344   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.582819   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.662237   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:28:47.736212   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.739757   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:28:47.739928   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:28:47.739802   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:28:47.739812   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:28:47.739841   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:28:47.783578   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:28:49.121698   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:49.266583   67149 cache_images.go:92] duration metric: took 2.402188013s to LoadCachedImages
	W1028 18:28:49.266686   67149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 18:28:49.266702   67149 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:28:49.266828   67149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:49.266918   67149 ssh_runner.go:195] Run: crio config
	I1028 18:28:49.318146   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:28:49.318167   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:49.318176   67149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:49.318193   67149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:28:49.318310   67149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:49.318371   67149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:28:49.329249   67149 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:49.329339   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:49.339379   67149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:28:49.359216   67149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:49.378289   67149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 18:28:49.397766   67149 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:49.401788   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:49.418204   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:49.558031   67149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:49.575443   67149 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:28:49.575469   67149 certs.go:194] generating shared ca certs ...
	I1028 18:28:49.575489   67149 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:49.575693   67149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:49.575746   67149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:49.575756   67149 certs.go:256] generating profile certs ...
	I1028 18:28:49.575859   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:28:49.575914   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:28:49.575951   67149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:28:49.576058   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:49.576092   67149 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:49.576103   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:49.576131   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:49.576162   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:49.576186   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:49.576238   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:49.576994   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:49.622814   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:49.653690   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:49.678975   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:49.707340   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:28:49.744836   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:28:49.776367   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:49.818999   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:28:49.847531   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:49.871924   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:49.897751   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:49.923267   67149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:49.939805   67149 ssh_runner.go:195] Run: openssl version
	I1028 18:28:49.945611   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:49.956191   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960862   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960916   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.966701   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:49.977882   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:49.990873   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995751   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995810   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:50.001891   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:50.013508   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:50.028132   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034144   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034217   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.041768   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:50.054079   67149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:50.058983   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:50.064802   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:50.070790   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:50.077090   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:50.083149   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:50.089232   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:28:50.095205   67149 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:50.095338   67149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:50.095411   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.139777   67149 cri.go:89] found id: ""
	I1028 18:28:50.139854   67149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:50.151967   67149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:50.151986   67149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:50.152040   67149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:50.163454   67149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:50.164876   67149 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:28:50.165798   67149 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-223868" cluster setting kubeconfig missing "old-k8s-version-223868" context setting]
	I1028 18:28:50.167121   67149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:50.169545   67149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:50.179447   67149 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.194
	I1028 18:28:50.179477   67149 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:50.179490   67149 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:50.179542   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.213891   67149 cri.go:89] found id: ""
	I1028 18:28:50.213963   67149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:50.231491   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:50.241752   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:50.241775   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:50.241829   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:50.252015   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:50.252075   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:50.263032   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:50.273500   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:50.273564   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:50.283603   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.293521   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:50.293567   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.303701   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:50.316202   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:50.316269   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:28:50.327841   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:50.341366   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:50.469586   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:49.414188   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:51.855115   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:50.478658   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479208   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479237   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:50.479151   68341 retry.go:31] will retry after 2.362711935s: waiting for machine to come up
	I1028 18:28:52.842907   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843290   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843314   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:52.843250   68341 retry.go:31] will retry after 2.561710525s: waiting for machine to come up
	I1028 18:28:51.507608   67149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037983659s)
	I1028 18:28:51.507645   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.733141   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.842228   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.947336   67149 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:51.947430   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.447618   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.947814   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.448476   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.947571   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.448371   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.947700   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.447735   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.948435   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.857886   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:54.862972   66801 pod_ready.go:93] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:54.863005   66801 pod_ready.go:82] duration metric: took 9.016413449s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:54.863019   66801 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869043   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:55.869076   66801 pod_ready.go:82] duration metric: took 1.006049217s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869091   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874842   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.874865   66801 pod_ready.go:82] duration metric: took 2.005766936s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874875   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878913   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.878930   66801 pod_ready.go:82] duration metric: took 4.049698ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878937   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889897   66801 pod_ready.go:93] pod "kube-proxy-kg42f" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.889913   66801 pod_ready.go:82] duration metric: took 10.971269ms for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889921   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.407934   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408336   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408362   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:55.408274   68341 retry.go:31] will retry after 3.762790995s: waiting for machine to come up
	I1028 18:28:59.173489   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173900   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Found IP for machine: 192.168.39.215
	I1028 18:28:59.173923   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has current primary IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173929   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserving static IP address...
	I1028 18:28:59.174320   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.174343   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | skip adding static IP to network mk-default-k8s-diff-port-692033 - found existing host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"}
	I1028 18:28:59.174355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserved static IP address: 192.168.39.215
	I1028 18:28:59.174365   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for SSH to be available...
	I1028 18:28:59.174376   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Getting to WaitForSSH function...
	I1028 18:28:59.176441   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176755   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.176786   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176913   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH client type: external
	I1028 18:28:59.176936   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa (-rw-------)
	I1028 18:28:59.176958   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:59.176970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | About to run SSH command:
	I1028 18:28:59.176982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | exit 0
	I1028 18:28:59.300272   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:59.300649   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetConfigRaw
	I1028 18:28:59.301261   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.303505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.303832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.303857   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.304080   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:28:59.304287   67489 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:59.304310   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:59.304535   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.306713   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307008   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.307042   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307187   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.307348   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307627   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.307768   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.307936   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.307946   67489 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:59.412710   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:59.412743   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413009   67489 buildroot.go:166] provisioning hostname "default-k8s-diff-port-692033"
	I1028 18:28:59.413041   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.415772   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416048   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.416070   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416251   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.416437   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416728   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.416847   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.417030   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.417041   67489 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-692033 && echo "default-k8s-diff-port-692033" | sudo tee /etc/hostname
	I1028 18:28:59.538491   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-692033
	
	I1028 18:28:59.538518   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.540842   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541144   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.541173   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.541527   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541684   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541815   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.541964   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.542123   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.542138   67489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-692033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-692033/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-692033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:59.657448   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:59.657480   67489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:59.657524   67489 buildroot.go:174] setting up certificates
	I1028 18:28:59.657539   67489 provision.go:84] configureAuth start
	I1028 18:28:59.657556   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.657832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.660465   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660797   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.660840   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660949   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.663393   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663801   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.663830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663977   67489 provision.go:143] copyHostCerts
	I1028 18:28:59.664049   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:59.664062   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:59.664117   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:59.664217   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:59.664228   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:59.664250   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:59.664300   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:59.664308   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:59.664327   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:59.664403   67489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-692033 san=[127.0.0.1 192.168.39.215 default-k8s-diff-port-692033 localhost minikube]
	I1028 18:28:59.882619   67489 provision.go:177] copyRemoteCerts
	I1028 18:28:59.882672   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:59.882695   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.885303   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.885686   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885927   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.886121   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.886278   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.886382   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:28:59.975231   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:00.000412   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 18:29:00.024424   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:29:00.048646   67489 provision.go:87] duration metric: took 391.090444ms to configureAuth
	I1028 18:29:00.048674   67489 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:00.048884   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:00.048970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.051793   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052156   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.052185   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.052532   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052729   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052894   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.053080   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.053241   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.053254   67489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:00.525285   66600 start.go:364] duration metric: took 54.917560334s to acquireMachinesLock for "embed-certs-021370"
	I1028 18:29:00.525349   66600 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:29:00.525359   66600 fix.go:54] fixHost starting: 
	I1028 18:29:00.525740   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:29:00.525778   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:29:00.544614   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I1028 18:29:00.544976   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:29:00.545433   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:29:00.545455   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:29:00.545842   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:29:00.546046   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:00.546230   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:29:00.547770   66600 fix.go:112] recreateIfNeeded on embed-certs-021370: state=Stopped err=<nil>
	I1028 18:29:00.547794   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	W1028 18:29:00.547957   66600 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:29:00.549753   66600 out.go:177] * Restarting existing kvm2 VM for "embed-certs-021370" ...
	I1028 18:28:56.447531   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.947711   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.447782   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.947642   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.948256   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.447558   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.948018   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.448186   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.947565   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.280618   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:00.280641   67489 machine.go:96] duration metric: took 976.341252ms to provisionDockerMachine
	I1028 18:29:00.280653   67489 start.go:293] postStartSetup for "default-k8s-diff-port-692033" (driver="kvm2")
	I1028 18:29:00.280669   67489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:00.280690   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.281004   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:00.281044   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.283656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.283977   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.284010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.284170   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.284382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.284549   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.284692   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.372947   67489 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:00.377456   67489 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:00.377480   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:00.377547   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:00.377646   67489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:00.377762   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:00.388767   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:00.413520   67489 start.go:296] duration metric: took 132.852709ms for postStartSetup
	I1028 18:29:00.413557   67489 fix.go:56] duration metric: took 19.992127182s for fixHost
	I1028 18:29:00.413578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.416040   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416377   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.416405   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416553   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.416756   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.416930   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.417065   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.417228   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.417412   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.417424   67489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:00.525082   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140140.492840769
	
	I1028 18:29:00.525105   67489 fix.go:216] guest clock: 1730140140.492840769
	I1028 18:29:00.525114   67489 fix.go:229] Guest: 2024-10-28 18:29:00.492840769 +0000 UTC Remote: 2024-10-28 18:29:00.413561948 +0000 UTC m=+205.301669628 (delta=79.278821ms)
	I1028 18:29:00.525169   67489 fix.go:200] guest clock delta is within tolerance: 79.278821ms
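The guest-clock check above compares the VM's `date +%s.%N` output against the host time and accepts the machine when the delta is small (79.278821ms here). A minimal standalone Go sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, while the two timestamps are the ones from the log line above, not minikube's fix.go logic.

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether guest and host clocks differ by no more
    // than the given tolerance (sign of the difference is ignored).
    func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values shaped like the log above: guest reported 1730140140.492840769.
    	guest := time.Unix(1730140140, 492840769)
    	host := time.Unix(1730140140, 413561948)
    	delta, ok := withinTolerance(host, guest, 2*time.Second) // tolerance is hypothetical
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }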
	I1028 18:29:00.525180   67489 start.go:83] releasing machines lock for "default-k8s-diff-port-692033", held for 20.103791447s
	I1028 18:29:00.525214   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.525495   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:00.528023   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528385   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.528415   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529038   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529287   67489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:00.529323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.529380   67489 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:00.529403   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.531822   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532022   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532163   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532294   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532443   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532481   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532488   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532612   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532680   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.532830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532830   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.532965   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.533103   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.609362   67489 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:00.636444   67489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:00.785916   67489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:00.792198   67489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:00.792279   67489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:00.812095   67489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:00.812124   67489 start.go:495] detecting cgroup driver to use...
	I1028 18:29:00.812190   67489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:00.829536   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:00.844021   67489 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:00.844090   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:00.858561   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:00.873128   67489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:00.990494   67489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:01.148650   67489 docker.go:233] disabling docker service ...
	I1028 18:29:01.148729   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:01.162487   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:01.177407   67489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:01.303665   67489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:01.430019   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:01.443822   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:01.462768   67489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:01.462830   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.473669   67489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:01.473737   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.484364   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.496220   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.507216   67489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:01.518848   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.534216   67489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.554294   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.565095   67489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:01.574547   67489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:01.574614   67489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:01.596531   67489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:01.606858   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:01.740272   67489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:01.844969   67489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:01.845053   67489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:01.850004   67489 start.go:563] Will wait 60s for crictl version
	I1028 18:29:01.850056   67489 ssh_runner.go:195] Run: which crictl
	I1028 18:29:01.854032   67489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:01.893281   67489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:01.893367   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.923557   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.956282   67489 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:00.551001   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Start
	I1028 18:29:00.551172   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring networks are active...
	I1028 18:29:00.551820   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network default is active
	I1028 18:29:00.552130   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network mk-embed-certs-021370 is active
	I1028 18:29:00.552482   66600 main.go:141] libmachine: (embed-certs-021370) Getting domain xml...
	I1028 18:29:00.553186   66600 main.go:141] libmachine: (embed-certs-021370) Creating domain...
	I1028 18:29:01.830016   66600 main.go:141] libmachine: (embed-certs-021370) Waiting to get IP...
	I1028 18:29:01.831046   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:01.831522   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:01.831630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:01.831518   68528 retry.go:31] will retry after 300.306268ms: waiting for machine to come up
	I1028 18:29:02.132901   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.133350   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.133383   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.133293   68528 retry.go:31] will retry after 383.232008ms: waiting for machine to come up
	I1028 18:29:02.518736   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.519274   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.519299   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.519241   68528 retry.go:31] will retry after 354.591942ms: waiting for machine to come up
	I1028 18:29:02.875813   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.876360   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.876397   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.876325   68528 retry.go:31] will retry after 529.444037ms: waiting for machine to come up
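The "will retry after ... waiting for machine to come up" lines above poll for the VM's DHCP lease with growing, jittered delays. Below is a generic sketch of that polling pattern; the lookupIP stub, attempt cap, and delay formula are illustrative assumptions, not the actual retry.go referenced in the log.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("no IP assigned yet")

    // lookupIP is a stand-in for asking libvirt/DHCP for the domain's address.
    func lookupIP() (string, error) {
    	return "", errNoIP // pretend the lease has not appeared yet
    }

    // waitForIP polls lookupIP with a jittered delay that grows per attempt.
    func waitForIP(maxAttempts int) (string, error) {
    	for i := 0; i < maxAttempts; i++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		delay := time.Duration(300+rand.Intn(300)) * time.Millisecond * time.Duration(i+1)
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    	}
    	return "", fmt.Errorf("machine did not get an IP after %d attempts", maxAttempts)
    }

    func main() {
    	if _, err := waitForIP(3); err != nil {
    		fmt.Println(err)
    	}
    }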
	I1028 18:28:58.895888   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:58.895918   66801 pod_ready.go:82] duration metric: took 1.005990705s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:58.895932   66801 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:00.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:02.903390   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:01.957748   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:01.960967   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:01.961382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961635   67489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:01.966300   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
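The /etc/hosts rewrite above drops any stale host.minikube.internal line and appends a fresh IP mapping. A rough Go equivalent of that read-filter-append step is sketched below; it works on an in-memory string for clarity, and writing the result back atomically (temp file plus rename, as the shell does with /tmp/h.$$) is intentionally omitted.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry removes existing lines ending in "<tab><name>" and appends
    // a single "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline above.
    func upsertHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
    			continue // drop the stale mapping
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n")
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal"
    	fmt.Println(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }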
	I1028 18:29:01.979786   67489 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:01.979899   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:01.979957   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:02.020659   67489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:02.020716   67489 ssh_runner.go:195] Run: which lz4
	I1028 18:29:02.024772   67489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:02.030183   67489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:02.030206   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:03.449423   67489 crio.go:462] duration metric: took 1.424673911s to copy over tarball
	I1028 18:29:03.449498   67489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:01.447557   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.947946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.448522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.947533   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.447522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.948025   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.448136   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.948157   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.447635   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.947987   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.407835   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:03.408366   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:03.408390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:03.408265   68528 retry.go:31] will retry after 680.005296ms: waiting for machine to come up
	I1028 18:29:04.089802   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.090390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.090409   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.090338   68528 retry.go:31] will retry after 833.681725ms: waiting for machine to come up
	I1028 18:29:04.925788   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.926278   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.926298   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.926227   68528 retry.go:31] will retry after 1.050194845s: waiting for machine to come up
	I1028 18:29:05.978270   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:05.978715   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:05.978742   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:05.978669   68528 retry.go:31] will retry after 1.416773018s: waiting for machine to come up
	I1028 18:29:07.397367   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:07.397843   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:07.397876   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:07.397787   68528 retry.go:31] will retry after 1.621623459s: waiting for machine to come up
	I1028 18:29:04.903465   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:06.903931   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:05.622217   67489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172685001s)
	I1028 18:29:05.622253   67489 crio.go:469] duration metric: took 2.172801769s to extract the tarball
	I1028 18:29:05.622264   67489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:05.660585   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:05.705484   67489 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:05.705510   67489 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:05.705520   67489 kubeadm.go:934] updating node { 192.168.39.215 8444 v1.31.2 crio true true} ...
	I1028 18:29:05.705634   67489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-692033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:05.705725   67489 ssh_runner.go:195] Run: crio config
	I1028 18:29:05.760618   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:05.760649   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:05.760661   67489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:05.760690   67489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-692033 NodeName:default-k8s-diff-port-692033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:05.760858   67489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-692033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:05.760936   67489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:05.771392   67489 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:05.771464   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:05.780926   67489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1028 18:29:05.797951   67489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:05.814159   67489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1028 18:29:05.830723   67489 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:05.835163   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:05.847192   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:05.972201   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:05.990475   67489 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033 for IP: 192.168.39.215
	I1028 18:29:05.990492   67489 certs.go:194] generating shared ca certs ...
	I1028 18:29:05.990511   67489 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:05.990711   67489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:05.990764   67489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:05.990776   67489 certs.go:256] generating profile certs ...
	I1028 18:29:05.990875   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.key
	I1028 18:29:05.990991   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key.81b9981a
	I1028 18:29:05.991052   67489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key
	I1028 18:29:05.991218   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:05.991268   67489 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:05.991283   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:05.991317   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:05.991359   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:05.991405   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:05.991481   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:05.992294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:06.033938   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:06.070407   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:06.115934   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:06.144600   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 18:29:06.169202   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:06.196294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:06.219384   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:29:06.242169   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:06.266506   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:06.290175   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:06.313006   67489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:06.329076   67489 ssh_runner.go:195] Run: openssl version
	I1028 18:29:06.335322   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:06.346021   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350401   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350464   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.356134   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:06.366765   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:06.377486   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381920   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381978   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.387492   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:06.398392   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:06.413238   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418376   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418429   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.423997   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:06.436170   67489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:06.440853   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:06.446851   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:06.452980   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:06.458973   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:06.465088   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:06.470776   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
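The series of openssl x509 -checkend 86400 calls above verifies each control-plane certificate stays valid for at least another 24 hours before reusing it. A self-contained Go version of that check, using crypto/x509 on a PEM file, could look like this; the file path in main is only an example taken from the log, and any PEM-encoded certificate works.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // becomes invalid within the given window (the `-checkend 86400` idea).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }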
	I1028 18:29:06.476462   67489 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:06.476588   67489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:06.476638   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.519820   67489 cri.go:89] found id: ""
	I1028 18:29:06.519884   67489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:06.530091   67489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:06.530110   67489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:06.530171   67489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:06.539807   67489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:06.540946   67489 kubeconfig.go:125] found "default-k8s-diff-port-692033" server: "https://192.168.39.215:8444"
	I1028 18:29:06.543088   67489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:06.552354   67489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I1028 18:29:06.552379   67489 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:06.552389   67489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:06.552445   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.586545   67489 cri.go:89] found id: ""
	I1028 18:29:06.586611   67489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:06.603418   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:06.612856   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:06.612876   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:06.612921   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:29:06.621852   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:06.621900   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:06.631132   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:29:06.640088   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:06.640158   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:06.651007   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.660034   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:06.660104   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.669587   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:29:06.678863   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:06.678937   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:06.688820   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:06.698470   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:06.820432   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.030810   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210339958s)
	I1028 18:29:08.030839   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.255000   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.321500   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.412775   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:08.412854   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.913648   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.413011   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.459009   67489 api_server.go:72] duration metric: took 1.046232596s to wait for apiserver process to appear ...
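	(Editorial note) The process wait above polls roughly like the sketch below; minikube does this polling in Go, the pgrep pattern is taken verbatim from the log, and the retry interval is an assumption based on the ~500ms spacing of the logged attempts:

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 0.5
	    done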
	I1028 18:29:09.459041   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:09.459062   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:09.459626   67489 api_server.go:269] stopped: https://192.168.39.215:8444/healthz: Get "https://192.168.39.215:8444/healthz": dial tcp 192.168.39.215:8444: connect: connection refused
	I1028 18:29:09.960128   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:06.447581   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.947550   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.447977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.947491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.447960   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.947662   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.448201   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.947753   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.448116   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.948175   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.020419   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:09.020867   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:09.020899   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:09.020814   68528 retry.go:31] will retry after 2.2230034s: waiting for machine to come up
	I1028 18:29:11.245136   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:11.245630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:11.245657   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:11.245595   68528 retry.go:31] will retry after 2.153898764s: waiting for machine to come up
	I1028 18:29:09.403596   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:11.903702   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:12.135346   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.135381   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.135394   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.166207   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.166234   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.459631   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.473153   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.473183   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:12.959778   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.969281   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.969320   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:13.459913   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:13.464362   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:29:13.471925   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:13.471953   67489 api_server.go:131] duration metric: took 4.012904227s to wait for apiserver health ...
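	(Editorial note) The healthz wait that just completed can be reproduced with a loop like the following (illustrative only; -k skips TLS verification because the endpoint serves the cluster CA, and the earlier 403/500 responses are expected until the rbac/bootstrap-roles and priority-class post-start hooks finish):

	    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.215:8444/healthz)" = "200" ]; do
	      sleep 0.5
	    done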
	I1028 18:29:13.471964   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:13.471971   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:13.473908   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:13.475283   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:13.487393   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
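	(Editorial note) The 496-byte conflist minikube copies here is not shown in the log. Purely as a hypothetical illustration, a bridge conflist of this kind typically looks roughly like the following; the plugin fields and the 10.244.0.0/16 pod subnet are assumptions, not the file's actual contents:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF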
	I1028 18:29:13.532627   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:13.544945   67489 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:13.544982   67489 system_pods.go:61] "coredns-7c65d6cfc9-ctx9z" [7067f349-3a22-468d-bd9d-19d057eb43f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:13.544993   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [313161ff-f30f-4e25-978d-9aa2eba7fc44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:13.545004   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [e9a66e8e-946b-4365-bd63-3adfdd75e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:13.545014   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [0e682f68-2f9a-4bf3-bbe4-3a6b1ef6778d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:13.545021   67489 system_pods.go:61] "kube-proxy-86rll" [d34f46c6-3227-40c9-ac97-066b98bfce32] Running
	I1028 18:29:13.545029   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [b9058969-31e2-4249-862f-ef5de7784adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:13.545043   67489 system_pods.go:61] "metrics-server-6867b74b74-dz4nl" [833c650e-5f5d-46a1-9ae1-64619c53a92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:13.545047   67489 system_pods.go:61] "storage-provisioner" [342db8fa-7873-47b0-a5a6-52cde2e19d47] Running
	I1028 18:29:13.545053   67489 system_pods.go:74] duration metric: took 12.403166ms to wait for pod list to return data ...
	I1028 18:29:13.545060   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:13.548591   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:13.548619   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:13.548632   67489 node_conditions.go:105] duration metric: took 3.567222ms to run NodePressure ...
	I1028 18:29:13.548649   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:13.818718   67489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826139   67489 kubeadm.go:739] kubelet initialised
	I1028 18:29:13.826161   67489 kubeadm.go:740] duration metric: took 7.415257ms waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826170   67489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:13.833418   67489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.838793   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838820   67489 pod_ready.go:82] duration metric: took 5.377698ms for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.838831   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838840   67489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.843172   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843195   67489 pod_ready.go:82] duration metric: took 4.34633ms for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.843203   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843209   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.847581   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847615   67489 pod_ready.go:82] duration metric: took 4.389898ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.847630   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847642   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
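	(Editorial note) The pod_ready checks above and below are minikube's own readiness polling for the system-critical pods. A roughly equivalent manual check — a hypothetical example, with the kubeconfig context assumed to match the profile name seen in the log — would be:

	    kubectl --context default-k8s-diff-port-692033 -n kube-system \
	      wait --for=condition=Ready pod/kube-controller-manager-default-k8s-diff-port-692033 --timeout=4m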
	I1028 18:29:11.448521   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.947592   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.448427   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.948413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.448390   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.948518   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.447929   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.948106   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.948236   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.401547   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:13.402054   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:13.402083   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:13.402028   68528 retry.go:31] will retry after 2.345507901s: waiting for machine to come up
	I1028 18:29:15.749122   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:15.749485   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:15.749502   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:15.749451   68528 retry.go:31] will retry after 2.974576274s: waiting for machine to come up
	I1028 18:29:13.903930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.403934   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:15.858338   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:18.354245   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.447535   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.948117   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.448197   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.948491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.948393   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.448406   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.947788   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.448100   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.947907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.727508   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.727990   66600 main.go:141] libmachine: (embed-certs-021370) Found IP for machine: 192.168.50.62
	I1028 18:29:18.728011   66600 main.go:141] libmachine: (embed-certs-021370) Reserving static IP address...
	I1028 18:29:18.728028   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has current primary IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.728447   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.728478   66600 main.go:141] libmachine: (embed-certs-021370) Reserved static IP address: 192.168.50.62
	I1028 18:29:18.728497   66600 main.go:141] libmachine: (embed-certs-021370) DBG | skip adding static IP to network mk-embed-certs-021370 - found existing host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"}
	I1028 18:29:18.728510   66600 main.go:141] libmachine: (embed-certs-021370) Waiting for SSH to be available...
	I1028 18:29:18.728520   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Getting to WaitForSSH function...
	I1028 18:29:18.730574   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731031   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.731069   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731227   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH client type: external
	I1028 18:29:18.731248   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa (-rw-------)
	I1028 18:29:18.731282   66600 main.go:141] libmachine: (embed-certs-021370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:29:18.731310   66600 main.go:141] libmachine: (embed-certs-021370) DBG | About to run SSH command:
	I1028 18:29:18.731327   66600 main.go:141] libmachine: (embed-certs-021370) DBG | exit 0
	I1028 18:29:18.860213   66600 main.go:141] libmachine: (embed-certs-021370) DBG | SSH cmd err, output: <nil>: 
	I1028 18:29:18.860619   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetConfigRaw
	I1028 18:29:18.861235   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:18.863576   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.863932   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.863956   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.864224   66600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/config.json ...
	I1028 18:29:18.864465   66600 machine.go:93] provisionDockerMachine start ...
	I1028 18:29:18.864521   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:18.864720   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.866951   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867314   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.867349   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867511   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.867665   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867811   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867941   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.868072   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.868230   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.868239   66600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:29:18.972695   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:29:18.972729   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.972970   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:29:18.973000   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.973209   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.975608   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.975889   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.975915   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.976082   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.976269   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976401   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976505   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.976625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.976796   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.976809   66600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-021370 && echo "embed-certs-021370" | sudo tee /etc/hostname
	I1028 18:29:19.094622   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-021370
	
	I1028 18:29:19.094655   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.097110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097436   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.097460   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097639   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.097817   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.097967   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.098121   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.098309   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.098517   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.098533   66600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-021370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-021370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-021370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:29:19.218088   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:29:19.218112   66600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:29:19.218140   66600 buildroot.go:174] setting up certificates
	I1028 18:29:19.218150   66600 provision.go:84] configureAuth start
	I1028 18:29:19.218159   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:19.218411   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:19.221093   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221441   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.221469   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221641   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.223628   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.223908   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.223928   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.224085   66600 provision.go:143] copyHostCerts
	I1028 18:29:19.224155   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:29:19.224185   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:29:19.224252   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:29:19.224380   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:29:19.224390   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:29:19.224422   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:29:19.224532   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:29:19.224542   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:29:19.224570   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:29:19.224655   66600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-021370 san=[127.0.0.1 192.168.50.62 embed-certs-021370 localhost minikube]
	I1028 18:29:19.402860   66600 provision.go:177] copyRemoteCerts
	I1028 18:29:19.402925   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:29:19.402954   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.405556   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.405904   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.405939   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.406100   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.406265   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.406391   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.406494   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.486543   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:19.510790   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:29:19.534037   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:29:19.557509   66600 provision.go:87] duration metric: took 339.349044ms to configureAuth
	I1028 18:29:19.557531   66600 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:19.557681   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:19.557745   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.560240   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560594   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.560623   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560757   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.560931   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561110   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561320   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.561490   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.561651   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.561664   66600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:19.781270   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:19.781304   66600 machine.go:96] duration metric: took 916.814114ms to provisionDockerMachine
	I1028 18:29:19.781317   66600 start.go:293] postStartSetup for "embed-certs-021370" (driver="kvm2")
	I1028 18:29:19.781327   66600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:19.781345   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:19.781664   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:19.781690   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.784176   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784509   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.784538   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784667   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.784854   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.785028   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.785171   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.867396   66600 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:19.871516   66600 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:19.871542   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:19.871630   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:19.871717   66600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:19.871799   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:19.882017   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:19.906531   66600 start.go:296] duration metric: took 125.203636ms for postStartSetup
	I1028 18:29:19.906562   66600 fix.go:56] duration metric: took 19.381205641s for fixHost
	I1028 18:29:19.906581   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.909285   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909610   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.909640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909778   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.909980   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910311   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910444   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.910625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.910788   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.910803   66600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:20.017311   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140159.989127147
	
	I1028 18:29:20.017339   66600 fix.go:216] guest clock: 1730140159.989127147
	I1028 18:29:20.017346   66600 fix.go:229] Guest: 2024-10-28 18:29:19.989127147 +0000 UTC Remote: 2024-10-28 18:29:19.906566181 +0000 UTC m=+356.890524496 (delta=82.560966ms)
	I1028 18:29:20.017368   66600 fix.go:200] guest clock delta is within tolerance: 82.560966ms
	I1028 18:29:20.017374   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 19.492049852s
	I1028 18:29:20.017396   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.017657   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:20.020286   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020680   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.020704   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020816   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021307   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021491   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021577   66600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:20.021616   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.021746   66600 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:20.021767   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.024157   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024429   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024511   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024533   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024679   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.024856   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.024880   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024896   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.025019   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025070   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.025160   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.025201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.025304   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025443   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.101316   66600 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:20.124859   66600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:20.268773   66600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:20.275277   66600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:20.275358   66600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:20.291972   66600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:20.291999   66600 start.go:495] detecting cgroup driver to use...
	I1028 18:29:20.292066   66600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:20.311389   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:20.325385   66600 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:20.325434   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:20.339246   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:20.353759   66600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:20.477639   66600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:20.622752   66600 docker.go:233] disabling docker service ...
	I1028 18:29:20.622825   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:20.637258   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:20.650210   66600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:20.801036   66600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:20.945078   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:20.959494   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:20.977797   66600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:20.977854   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.987991   66600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:20.988038   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.998188   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.008502   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.018540   66600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:21.028663   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.038758   66600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.056298   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
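	(Editorial note) After the sed edits above, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should carry the values shown in those commands (pause image, cgroupfs cgroup manager, "pod" conmon cgroup, and the unprivileged-port sysctl). A quick verification step, not part of the log itself:

	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, assuming the edits applied cleanly:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"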
	I1028 18:29:21.067136   66600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:21.076859   66600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:21.076906   66600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:21.090468   66600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
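[editor's note] The sysctl probe above fails only because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/ does not exist; the log marks this as "might be okay", loads the module, and then enables IPv4 forwarding. A hedged Go sketch of that recovery path (must run as root; paths and commands are the ones shown in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// If the bridge netfilter sysctl file is missing, the br_netfilter
    	// kernel module has not been loaded yet -- load it instead of failing.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
    			os.Exit(1)
    		}
    	}
    	// Enable IPv4 forwarding, equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
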
	I1028 18:29:21.099951   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:21.226675   66600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:21.321993   66600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:21.322074   66600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:21.327981   66600 start.go:563] Will wait 60s for crictl version
	I1028 18:29:21.328028   66600 ssh_runner.go:195] Run: which crictl
	I1028 18:29:21.331673   66600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:21.369066   66600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:21.369168   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.396873   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.426233   66600 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:21.427570   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:21.430207   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430560   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:21.430582   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430732   66600 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:21.435293   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
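[editor's note] The grep/echo/cp pipeline above is an idempotent hosts-file update: drop any existing host.minikube.internal line, append the current mapping, then copy the result back over /etc/hosts. A rough Go equivalent (illustrative only; the bash one-liner in the log is what actually runs):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry removes any stale line for the hostname and appends a
    // fresh "IP<TAB>hostname" entry, mirroring the logged pipeline.
    func ensureHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
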
	I1028 18:29:21.447885   66600 kubeadm.go:883] updating cluster {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:21.447989   66600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:21.448067   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:21.488401   66600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:21.488488   66600 ssh_runner.go:195] Run: which lz4
	I1028 18:29:21.492578   66600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:21.496531   66600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:21.496560   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:22.824198   66600 crio.go:462] duration metric: took 1.331643546s to copy over tarball
	I1028 18:29:22.824276   66600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:18.902233   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.902721   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.904121   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.354850   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.355961   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:24.854445   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:21.447903   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.948305   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.448529   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.947708   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.447881   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.947572   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.448433   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.948299   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.447748   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.947863   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
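[editor's note] The repeated pgrep lines from process 67149 (and later 66600) are a simple poll: run pgrep roughly every 500 ms until kube-apiserver appears or a deadline expires. A sketch of such a wait loop in Go (the interval and pattern match the log; the timeout value is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout elapses.
    // (The log runs pgrep under sudo; root is assumed here.)
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
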
	I1028 18:29:24.906928   66600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082617931s)
	I1028 18:29:24.906959   66600 crio.go:469] duration metric: took 2.082732511s to extract the tarball
	I1028 18:29:24.906968   66600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:24.944094   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:24.991024   66600 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:24.991048   66600 cache_images.go:84] Images are preloaded, skipping loading
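[editor's note] The preload sequence above checks crictl's image list for the expected kube-apiserver tag, and only when it is missing copies the preloaded tarball to the node and unpacks it into /var with tar + lz4 before re-checking. A hedged Go sketch of that decision (the commands, image tag, and extraction path are the ones shown in the log; the JSON handling is simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hasImage reports whether crictl already knows about the given image tag.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	// A full implementation would unmarshal the JSON; a substring check
    	// is enough for this sketch.
    	return strings.Contains(string(out), tag), nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if !ok {
    		// Assumes the tarball has already been copied to /preloaded.tar.lz4.
    		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "extract preload: %v\n%s", err, out)
    			os.Exit(1)
    		}
    	}
    }
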
	I1028 18:29:24.991057   66600 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.31.2 crio true true} ...
	I1028 18:29:24.991175   66600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-021370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:24.991262   66600 ssh_runner.go:195] Run: crio config
	I1028 18:29:25.034609   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:25.034629   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:25.034639   66600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:25.034657   66600 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-021370 NodeName:embed-certs-021370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:25.034803   66600 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-021370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:25.034858   66600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:25.044587   66600 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:25.044661   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:25.054150   66600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 18:29:25.070100   66600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:25.085866   66600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1028 18:29:25.101932   66600 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:25.105817   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:25.117399   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:25.235698   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:25.251517   66600 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370 for IP: 192.168.50.62
	I1028 18:29:25.251536   66600 certs.go:194] generating shared ca certs ...
	I1028 18:29:25.251549   66600 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:25.251701   66600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:25.251758   66600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:25.251771   66600 certs.go:256] generating profile certs ...
	I1028 18:29:25.251871   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/client.key
	I1028 18:29:25.251951   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key.1a2ee1e7
	I1028 18:29:25.252010   66600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key
	I1028 18:29:25.252184   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:25.252213   66600 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:25.252222   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:25.252246   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:25.252271   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:25.252291   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:25.252328   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:25.252968   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:25.280370   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:25.323757   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:25.356813   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:25.395729   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 18:29:25.428768   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:25.459929   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:25.485206   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:29:25.514312   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:25.537007   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:25.559926   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:25.582419   66600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:25.599284   66600 ssh_runner.go:195] Run: openssl version
	I1028 18:29:25.605132   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:25.615576   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619856   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619911   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.625516   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:25.636185   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:25.646664   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650958   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650998   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.657176   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:25.668490   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:25.679608   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.683993   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.684041   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.689729   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:25.700817   66600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:25.705214   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:25.711351   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:25.717172   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:25.722879   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:25.728415   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:25.733859   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
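[editor's note] The openssl x509 -checkend 86400 runs above verify that each control-plane certificate will still be valid 24 hours from now before the cluster is restarted. The same check in Go with crypto/x509 (a sketch; file paths are two of the ones probed in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given duration -- the question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM data", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			continue
    		}
    		fmt.Printf("%s expires within 24h: %v\n", p, soon)
    	}
    }
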
	I1028 18:29:25.739422   66600 kubeadm.go:392] StartCluster: {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:25.739492   66600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:25.739534   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.779869   66600 cri.go:89] found id: ""
	I1028 18:29:25.779926   66600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:25.790753   66600 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:25.790771   66600 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:25.790811   66600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:25.800588   66600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:25.801624   66600 kubeconfig.go:125] found "embed-certs-021370" server: "https://192.168.50.62:8443"
	I1028 18:29:25.803466   66600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:25.813212   66600 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I1028 18:29:25.813240   66600 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:25.813254   66600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:25.813312   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.848911   66600 cri.go:89] found id: ""
	I1028 18:29:25.848976   66600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:25.866165   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:25.876454   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:25.876485   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:25.876539   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:29:25.886746   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:25.886802   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:25.897486   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:29:25.907828   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:25.907881   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:25.917520   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.926896   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:25.926950   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.937184   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:29:25.946539   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:25.946585   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:25.956520   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
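[editor's note] The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply absent, so grep exits 2 and the rm -f calls are no-ops). A hedged Go sketch of that check (endpoint and paths as in the log; missing files are treated like stale ones):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pruneStaleKubeconfig removes a kubeconfig that does not reference the
    // expected control-plane endpoint; a missing file is handled the same way.
    func pruneStaleKubeconfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // up to date, keep it
    	}
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	return os.RemoveAll(path) // rm -f semantics: no error if already gone
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, p := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := pruneStaleKubeconfig(p, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
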
	I1028 18:29:25.968541   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:26.077716   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.298743   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.220990469s)
	I1028 18:29:27.298777   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.517286   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.582890   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.648091   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:27.648159   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.402969   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:27.405049   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.356621   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.356642   67489 pod_ready.go:82] duration metric: took 12.508989427s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.356653   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361609   67489 pod_ready.go:93] pod "kube-proxy-86rll" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.361627   67489 pod_ready.go:82] duration metric: took 4.968039ms for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361635   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365430   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.365449   67489 pod_ready.go:82] duration metric: took 3.807327ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365460   67489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:28.373442   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.448386   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.948082   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.447496   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.948285   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.947683   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.447813   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.947810   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.448413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.947477   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.148668   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.648320   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.148392   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.648218   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.682858   66600 api_server.go:72] duration metric: took 2.034774456s to wait for apiserver process to appear ...
	I1028 18:29:29.682888   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:29.682915   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:29.683457   66600 api_server.go:269] stopped: https://192.168.50.62:8443/healthz: Get "https://192.168.50.62:8443/healthz": dial tcp 192.168.50.62:8443: connect: connection refused
	I1028 18:29:30.182997   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.878280   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.878304   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:32.878318   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.942789   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.942828   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:29.903158   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:32.404024   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.183344   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.187337   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.187362   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:33.683288   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.687653   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.687680   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:34.183190   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:34.187671   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:29:34.195909   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:34.195938   66600 api_server.go:131] duration metric: took 4.51303648s to wait for apiserver health ...
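[editor's note] The healthz exchange above shows the expected startup sequence after a restart: first 403 (the anonymous probe is rejected before RBAC bootstrap roles exist), then 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200 "ok". The prober simply keeps polling and treats the 403/500 responses as transient. A sketch of such a probe in Go (URL from the log; timeout, interval, and the InsecureSkipVerify shortcut are assumptions for illustration):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	// The probe only cares about reachability and the status code, so it
    	// skips certificate verification, like an anonymous healthz check.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.50.62:8443/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			// 403 (anonymous user) and 500 (post-start hooks pending) are transient.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
    	os.Exit(1)
    }
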
	I1028 18:29:34.195950   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:34.195959   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:34.197469   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:30.872450   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.372710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:31.448099   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.948269   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.447660   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.947559   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.447716   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.948569   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.447555   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.947612   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.448411   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.947786   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.198803   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:34.221645   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:34.250694   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:34.261167   66600 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:34.261211   66600 system_pods.go:61] "coredns-7c65d6cfc9-bdtd8" [e1fff57c-ba57-4592-9049-7cc80a6f67a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:34.261229   66600 system_pods.go:61] "etcd-embed-certs-021370" [0c805e30-b6d8-416c-97af-c33b142b46e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:34.261240   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [244e08f7-7e8c-4547-b145-9816374fe582] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:34.261251   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [c08dc68e-d441-4d96-8377-957c381c4ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:34.261265   66600 system_pods.go:61] "kube-proxy-7g7lr" [828a4297-7703-46a7-bffe-c8daf83ef4bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:29:34.261277   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [2bc3fea6-0f01-43e9-b69e-deb26980e658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:34.261286   66600 system_pods.go:61] "metrics-server-6867b74b74-gg8bl" [599d8cf3-717d-46b2-a5ba-43e00f46829b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:34.261296   66600 system_pods.go:61] "storage-provisioner" [ad047e20-2de9-447c-83bc-8b835292a25f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:29:34.261307   66600 system_pods.go:74] duration metric: took 10.589505ms to wait for pod list to return data ...
	I1028 18:29:34.261319   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:34.265041   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:34.265066   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:34.265079   66600 node_conditions.go:105] duration metric: took 3.75485ms to run NodePressure ...
	I1028 18:29:34.265098   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:34.567509   66600 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571573   66600 kubeadm.go:739] kubelet initialised
	I1028 18:29:34.571592   66600 kubeadm.go:740] duration metric: took 4.056877ms waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571599   66600 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:34.576872   66600 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:36.586357   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:34.901383   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.902526   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:35.871154   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:37.873138   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.447566   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.947886   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.448276   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.948547   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.447546   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.947974   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.448334   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.948183   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.448396   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.947620   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.083269   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.083414   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:41.083443   66600 pod_ready.go:82] duration metric: took 6.506548177s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:41.083453   66600 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:39.401480   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.402426   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:40.370529   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:42.371580   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:44.372259   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.448306   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.947486   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.448219   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.948295   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.447765   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.947468   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.448454   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.947488   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.447568   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.948070   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.089927   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.589484   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.594775   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:43.403246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.403595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.902160   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.872441   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.371650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.448123   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.948178   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.447989   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.947888   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.448230   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.947692   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.448090   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.947996   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.447949   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.947977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.089584   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.089607   66600 pod_ready.go:82] duration metric: took 7.006147079s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.089619   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093940   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.093959   66600 pod_ready.go:82] duration metric: took 4.332474ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093969   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098279   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.098295   66600 pod_ready.go:82] duration metric: took 4.319206ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098304   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102326   66600 pod_ready.go:93] pod "kube-proxy-7g7lr" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.102341   66600 pod_ready.go:82] duration metric: took 4.03162ms for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102349   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106249   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.106265   66600 pod_ready.go:82] duration metric: took 3.910208ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106279   66600 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:50.112678   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:52.113794   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.902296   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.902424   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.371741   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:53.371833   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.448130   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:51.948450   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:51.948545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:51.987428   67149 cri.go:89] found id: ""
	I1028 18:29:51.987459   67149 logs.go:282] 0 containers: []
	W1028 18:29:51.987470   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:51.987478   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:51.987534   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:52.021429   67149 cri.go:89] found id: ""
	I1028 18:29:52.021452   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.021460   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:52.021466   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:52.021509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:52.055338   67149 cri.go:89] found id: ""
	I1028 18:29:52.055362   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.055373   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:52.055380   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:52.055432   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:52.088673   67149 cri.go:89] found id: ""
	I1028 18:29:52.088697   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.088705   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:52.088711   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:52.088766   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:52.129833   67149 cri.go:89] found id: ""
	I1028 18:29:52.129854   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.129862   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:52.129867   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:52.129918   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:52.162994   67149 cri.go:89] found id: ""
	I1028 18:29:52.163029   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.163040   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:52.163047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:52.163105   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:52.196819   67149 cri.go:89] found id: ""
	I1028 18:29:52.196840   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.196848   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:52.196853   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:52.196906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:52.232924   67149 cri.go:89] found id: ""
	I1028 18:29:52.232955   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.232965   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:52.232977   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:52.232992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:52.283317   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:52.283353   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:52.296648   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:52.296673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:52.423396   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:52.423418   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:52.423429   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:52.497671   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:52.497704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:55.037920   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:55.052539   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:55.052602   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:55.089302   67149 cri.go:89] found id: ""
	I1028 18:29:55.089332   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.089343   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:55.089351   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:55.089404   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:55.127317   67149 cri.go:89] found id: ""
	I1028 18:29:55.127345   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.127352   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:55.127358   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:55.127413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:55.161689   67149 cri.go:89] found id: ""
	I1028 18:29:55.161714   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.161721   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:55.161727   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:55.161772   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:55.196494   67149 cri.go:89] found id: ""
	I1028 18:29:55.196521   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.196534   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:55.196542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:55.196596   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:55.234980   67149 cri.go:89] found id: ""
	I1028 18:29:55.235008   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.235020   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:55.235028   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:55.235086   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:55.274750   67149 cri.go:89] found id: ""
	I1028 18:29:55.274775   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.274783   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:55.274789   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:55.274842   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:55.309839   67149 cri.go:89] found id: ""
	I1028 18:29:55.309865   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.309874   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:55.309881   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:55.309943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:55.358765   67149 cri.go:89] found id: ""
	I1028 18:29:55.358793   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.358805   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:55.358816   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:55.358830   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:55.422821   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:55.422869   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:55.439458   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:55.439482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:55.507743   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:55.507764   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:55.507775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:55.582679   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:55.582710   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:54.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.612967   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:54.402722   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.902816   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:55.372539   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:57.871444   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:58.124907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:58.139125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:58.139181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:58.178829   67149 cri.go:89] found id: ""
	I1028 18:29:58.178853   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.178864   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:58.178871   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:58.178933   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:58.212290   67149 cri.go:89] found id: ""
	I1028 18:29:58.212320   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.212336   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:58.212344   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:58.212402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:58.246108   67149 cri.go:89] found id: ""
	I1028 18:29:58.246135   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.246145   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:58.246152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:58.246212   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:58.280625   67149 cri.go:89] found id: ""
	I1028 18:29:58.280651   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.280662   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:58.280670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:58.280727   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:58.318755   67149 cri.go:89] found id: ""
	I1028 18:29:58.318783   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.318793   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:58.318801   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:58.318853   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:58.356452   67149 cri.go:89] found id: ""
	I1028 18:29:58.356487   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.356499   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:58.356506   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:58.356564   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:58.389906   67149 cri.go:89] found id: ""
	I1028 18:29:58.389928   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.389936   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:58.389943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:58.390001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:58.425883   67149 cri.go:89] found id: ""
	I1028 18:29:58.425911   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.425920   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:58.425929   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:58.425943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:58.484392   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:58.484433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:58.498133   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:58.498159   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:58.572358   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:58.572382   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:58.572397   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:58.654963   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:58.654997   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:58.613408   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.614235   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:59.402355   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.403000   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.370479   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:02.370951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:04.372159   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.196593   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:01.209622   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:01.209693   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:01.243682   67149 cri.go:89] found id: ""
	I1028 18:30:01.243708   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.243718   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:01.243726   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:01.243786   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:01.277617   67149 cri.go:89] found id: ""
	I1028 18:30:01.277646   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.277654   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:01.277660   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:01.277710   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:01.314028   67149 cri.go:89] found id: ""
	I1028 18:30:01.314055   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.314067   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:01.314081   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:01.314152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:01.350324   67149 cri.go:89] found id: ""
	I1028 18:30:01.350348   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.350356   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:01.350362   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:01.350415   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:01.385802   67149 cri.go:89] found id: ""
	I1028 18:30:01.385826   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.385834   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:01.385840   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:01.385883   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:01.421507   67149 cri.go:89] found id: ""
	I1028 18:30:01.421534   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.421545   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:01.421553   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:01.421611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:01.457285   67149 cri.go:89] found id: ""
	I1028 18:30:01.457314   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.457326   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:01.457333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:01.457380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:01.490962   67149 cri.go:89] found id: ""
	I1028 18:30:01.490984   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.490992   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:01.491000   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:01.491012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:01.559906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:01.559937   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:01.559962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:01.639455   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:01.639485   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:01.681968   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:01.681994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:01.736639   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:01.736672   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.251876   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:04.265639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:04.265711   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:04.300133   67149 cri.go:89] found id: ""
	I1028 18:30:04.300159   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.300167   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:04.300173   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:04.300228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:04.335723   67149 cri.go:89] found id: ""
	I1028 18:30:04.335749   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.335760   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:04.335767   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:04.335825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:04.373009   67149 cri.go:89] found id: ""
	I1028 18:30:04.373030   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.373040   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:04.373048   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:04.373113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:04.405969   67149 cri.go:89] found id: ""
	I1028 18:30:04.405993   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.406003   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:04.406011   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:04.406066   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:04.441067   67149 cri.go:89] found id: ""
	I1028 18:30:04.441095   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.441106   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:04.441112   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:04.441176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:04.475231   67149 cri.go:89] found id: ""
	I1028 18:30:04.475260   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.475270   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:04.475277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:04.475342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:04.512970   67149 cri.go:89] found id: ""
	I1028 18:30:04.512998   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.513009   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:04.513017   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:04.513078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:04.547857   67149 cri.go:89] found id: ""
	I1028 18:30:04.547880   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.547890   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:04.547901   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:04.547913   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:04.598870   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:04.598900   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.612678   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:04.612705   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:04.686945   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:04.686967   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:04.686979   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:04.764943   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:04.764992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:03.113309   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.113449   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.613568   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:03.902735   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.903116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:06.872012   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:09.371576   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.310905   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:07.323880   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:07.323946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:07.363597   67149 cri.go:89] found id: ""
	I1028 18:30:07.363626   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.363637   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:07.363645   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:07.363706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:07.401051   67149 cri.go:89] found id: ""
	I1028 18:30:07.401073   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.401082   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:07.401089   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:07.401147   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:07.439710   67149 cri.go:89] found id: ""
	I1028 18:30:07.439735   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.439743   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:07.439748   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:07.439796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:07.476627   67149 cri.go:89] found id: ""
	I1028 18:30:07.476653   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.476663   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:07.476670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:07.476747   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:07.508770   67149 cri.go:89] found id: ""
	I1028 18:30:07.508796   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.508807   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:07.508814   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:07.508874   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:07.543467   67149 cri.go:89] found id: ""
	I1028 18:30:07.543496   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.543506   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:07.543514   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:07.543575   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:07.577181   67149 cri.go:89] found id: ""
	I1028 18:30:07.577204   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.577212   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:07.577217   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:07.577266   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:07.611862   67149 cri.go:89] found id: ""
	I1028 18:30:07.611886   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.611896   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:07.611906   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:07.611924   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:07.699794   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:07.699833   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.747920   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:07.747948   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:07.797402   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:07.797434   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:07.811752   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:07.811778   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:07.881604   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.382191   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:10.394572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:10.394624   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:10.428941   67149 cri.go:89] found id: ""
	I1028 18:30:10.428973   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.428984   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:10.429004   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:10.429071   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:10.462526   67149 cri.go:89] found id: ""
	I1028 18:30:10.462558   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.462569   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:10.462578   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:10.462641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:10.498472   67149 cri.go:89] found id: ""
	I1028 18:30:10.498495   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.498503   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:10.498509   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:10.498557   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:10.535400   67149 cri.go:89] found id: ""
	I1028 18:30:10.535422   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.535430   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:10.535436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:10.535483   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:10.568961   67149 cri.go:89] found id: ""
	I1028 18:30:10.568981   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.568988   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:10.568994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:10.569041   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:10.601273   67149 cri.go:89] found id: ""
	I1028 18:30:10.601306   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.601318   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:10.601325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:10.601383   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:10.638093   67149 cri.go:89] found id: ""
	I1028 18:30:10.638124   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.638135   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:10.638141   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:10.638203   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:10.674624   67149 cri.go:89] found id: ""
	I1028 18:30:10.674654   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.674665   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:10.674675   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:10.674688   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:10.714568   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:10.714602   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:10.764732   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:10.764765   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:10.778111   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:10.778139   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:10.854488   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.854516   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:10.854531   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:10.113469   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.614268   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:08.401958   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:10.402159   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.402379   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:11.872789   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.372947   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:13.438803   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:13.452322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:13.452397   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:13.487337   67149 cri.go:89] found id: ""
	I1028 18:30:13.487360   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.487369   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:13.487381   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:13.487488   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:13.521992   67149 cri.go:89] found id: ""
	I1028 18:30:13.522024   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.522034   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:13.522041   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:13.522099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:13.555315   67149 cri.go:89] found id: ""
	I1028 18:30:13.555347   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.555363   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:13.555371   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:13.555431   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:13.589401   67149 cri.go:89] found id: ""
	I1028 18:30:13.589425   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.589436   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:13.589445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:13.589493   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:13.629340   67149 cri.go:89] found id: ""
	I1028 18:30:13.629370   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.629385   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:13.629393   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:13.629454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:13.667307   67149 cri.go:89] found id: ""
	I1028 18:30:13.667337   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.667348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:13.667355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:13.667418   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:13.701457   67149 cri.go:89] found id: ""
	I1028 18:30:13.701513   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.701526   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:13.701536   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:13.701594   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:13.737989   67149 cri.go:89] found id: ""
	I1028 18:30:13.738023   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.738033   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:13.738043   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:13.738056   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:13.791743   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:13.791777   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:13.805501   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:13.805529   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:13.882239   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:13.882262   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:13.882276   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.963480   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:13.963516   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:15.112587   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:17.113242   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.901879   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.902869   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.871650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:18.872448   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.502799   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:16.516397   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:16.516456   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:16.551670   67149 cri.go:89] found id: ""
	I1028 18:30:16.551701   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.551712   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:16.551719   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:16.551771   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:16.584390   67149 cri.go:89] found id: ""
	I1028 18:30:16.584417   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.584428   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:16.584435   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:16.584510   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:16.620868   67149 cri.go:89] found id: ""
	I1028 18:30:16.620892   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.620899   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:16.620904   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:16.620949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:16.654189   67149 cri.go:89] found id: ""
	I1028 18:30:16.654216   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.654225   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:16.654231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:16.654284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:16.694526   67149 cri.go:89] found id: ""
	I1028 18:30:16.694557   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.694568   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:16.694575   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:16.694640   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:16.728857   67149 cri.go:89] found id: ""
	I1028 18:30:16.728884   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.728892   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:16.728898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:16.728948   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:16.763198   67149 cri.go:89] found id: ""
	I1028 18:30:16.763220   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.763227   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:16.763232   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:16.763282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:16.800120   67149 cri.go:89] found id: ""
	I1028 18:30:16.800142   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.800149   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:16.800157   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:16.800167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:16.852710   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:16.852736   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:16.867365   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:16.867395   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:16.945605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:16.945627   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:16.945643   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:17.022838   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:17.022871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.563585   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:19.577612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:19.577683   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:19.615797   67149 cri.go:89] found id: ""
	I1028 18:30:19.615820   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.615829   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:19.615836   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:19.615882   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:19.654780   67149 cri.go:89] found id: ""
	I1028 18:30:19.654802   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.654810   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:19.654816   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:19.654873   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:19.693502   67149 cri.go:89] found id: ""
	I1028 18:30:19.693532   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.693542   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:19.693550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:19.693611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:19.731869   67149 cri.go:89] found id: ""
	I1028 18:30:19.731902   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.731910   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:19.731916   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:19.731974   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:19.765046   67149 cri.go:89] found id: ""
	I1028 18:30:19.765081   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.765092   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:19.765099   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:19.765158   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:19.798082   67149 cri.go:89] found id: ""
	I1028 18:30:19.798105   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.798113   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:19.798119   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:19.798172   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:19.832562   67149 cri.go:89] found id: ""
	I1028 18:30:19.832590   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.832601   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:19.832608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:19.832676   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:19.867213   67149 cri.go:89] found id: ""
	I1028 18:30:19.867240   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.867251   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:19.867260   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:19.867277   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:19.942276   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:19.942304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.977642   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:19.977671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:20.027077   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:20.027109   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:20.040159   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:20.040181   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:20.113350   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:19.113850   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.613505   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:19.402671   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.902317   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.372438   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.872137   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:22.614379   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:22.628550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:22.628607   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:22.662647   67149 cri.go:89] found id: ""
	I1028 18:30:22.662670   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.662677   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:22.662683   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:22.662732   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:22.696697   67149 cri.go:89] found id: ""
	I1028 18:30:22.696736   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.696747   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:22.696753   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:22.696815   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:22.730011   67149 cri.go:89] found id: ""
	I1028 18:30:22.730039   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.730049   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:22.730056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:22.730114   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:22.766604   67149 cri.go:89] found id: ""
	I1028 18:30:22.766629   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.766639   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:22.766647   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:22.766703   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:22.800581   67149 cri.go:89] found id: ""
	I1028 18:30:22.800608   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.800617   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:22.800625   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:22.800692   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:22.832742   67149 cri.go:89] found id: ""
	I1028 18:30:22.832767   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.832775   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:22.832780   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:22.832823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:22.865850   67149 cri.go:89] found id: ""
	I1028 18:30:22.865876   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.865885   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:22.865892   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:22.865949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:22.904410   67149 cri.go:89] found id: ""
	I1028 18:30:22.904433   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.904443   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:22.904454   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:22.904482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:22.959275   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:22.959310   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:22.972630   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:22.972652   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:23.043851   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:23.043873   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:23.043886   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:23.121657   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:23.121686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:25.662109   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:25.676366   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:25.676443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:25.715192   67149 cri.go:89] found id: ""
	I1028 18:30:25.715216   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.715224   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:25.715230   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:25.715283   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:25.754736   67149 cri.go:89] found id: ""
	I1028 18:30:25.754765   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.754773   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:25.754779   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:25.754823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:25.794179   67149 cri.go:89] found id: ""
	I1028 18:30:25.794207   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.794216   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:25.794224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:25.794278   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:25.833206   67149 cri.go:89] found id: ""
	I1028 18:30:25.833238   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.833246   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:25.833252   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:25.833298   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:25.871628   67149 cri.go:89] found id: ""
	I1028 18:30:25.871659   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.871669   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:25.871677   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:25.871735   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:25.910900   67149 cri.go:89] found id: ""
	I1028 18:30:25.910924   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.910934   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:25.910942   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:25.911001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:25.943972   67149 cri.go:89] found id: ""
	I1028 18:30:25.943992   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.943999   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:25.944004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:25.944059   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:25.982521   67149 cri.go:89] found id: ""
	I1028 18:30:25.982544   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.982551   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:25.982559   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:25.982569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:26.033003   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:26.033031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:26.046480   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:26.046503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 18:30:24.112244   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.113815   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.902652   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.402135   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:25.873075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.372129   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	W1028 18:30:26.117194   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:26.117213   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:26.117230   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:26.195399   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:26.195430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:28.737237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:28.751846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:28.751910   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:28.794259   67149 cri.go:89] found id: ""
	I1028 18:30:28.794290   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.794301   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:28.794308   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:28.794374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:28.827573   67149 cri.go:89] found id: ""
	I1028 18:30:28.827603   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.827611   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:28.827616   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:28.827671   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:28.860676   67149 cri.go:89] found id: ""
	I1028 18:30:28.860702   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.860713   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:28.860721   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:28.860780   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:28.897302   67149 cri.go:89] found id: ""
	I1028 18:30:28.897327   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.897343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:28.897351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:28.897410   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:28.933425   67149 cri.go:89] found id: ""
	I1028 18:30:28.933454   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.933464   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:28.933471   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:28.933535   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:28.966004   67149 cri.go:89] found id: ""
	I1028 18:30:28.966032   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.966043   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:28.966051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:28.966107   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:29.002788   67149 cri.go:89] found id: ""
	I1028 18:30:29.002818   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.002829   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:29.002835   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:29.002894   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:29.033351   67149 cri.go:89] found id: ""
	I1028 18:30:29.033379   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.033389   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:29.033400   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:29.033420   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:29.107997   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:29.108025   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:29.144727   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:29.144753   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:29.206487   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:29.206521   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:29.219722   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:29.219744   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:29.288254   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:28.612485   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.113113   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.902960   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.871338   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.372081   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.789035   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:31.802587   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:31.802650   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:31.838372   67149 cri.go:89] found id: ""
	I1028 18:30:31.838401   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.838410   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:31.838416   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:31.838469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:31.877794   67149 cri.go:89] found id: ""
	I1028 18:30:31.877822   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.877833   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:31.877840   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:31.877896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:31.917442   67149 cri.go:89] found id: ""
	I1028 18:30:31.917472   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.917483   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:31.917490   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:31.917549   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:31.951900   67149 cri.go:89] found id: ""
	I1028 18:30:31.951931   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.951943   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:31.951951   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:31.952008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:31.988011   67149 cri.go:89] found id: ""
	I1028 18:30:31.988040   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.988051   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:31.988058   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:31.988116   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:32.021042   67149 cri.go:89] found id: ""
	I1028 18:30:32.021063   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.021071   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:32.021077   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:32.021124   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:32.053748   67149 cri.go:89] found id: ""
	I1028 18:30:32.053770   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.053778   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:32.053783   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:32.053837   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:32.089725   67149 cri.go:89] found id: ""
	I1028 18:30:32.089756   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.089766   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:32.089777   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:32.089790   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:32.140000   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:32.140031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:32.154023   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:32.154046   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:32.231222   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:32.231242   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:32.231255   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:32.311354   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:32.311388   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:34.852507   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:34.867133   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:34.867198   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:34.901201   67149 cri.go:89] found id: ""
	I1028 18:30:34.901228   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.901238   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:34.901245   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:34.901300   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:34.962788   67149 cri.go:89] found id: ""
	I1028 18:30:34.962814   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.962824   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:34.962835   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:34.962896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:34.996879   67149 cri.go:89] found id: ""
	I1028 18:30:34.996906   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.996917   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:34.996926   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:34.996986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:35.033516   67149 cri.go:89] found id: ""
	I1028 18:30:35.033541   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.033553   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:35.033560   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:35.033622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:35.066903   67149 cri.go:89] found id: ""
	I1028 18:30:35.066933   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.066945   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:35.066953   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:35.067010   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:35.099675   67149 cri.go:89] found id: ""
	I1028 18:30:35.099697   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.099704   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:35.099710   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:35.099755   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:35.133595   67149 cri.go:89] found id: ""
	I1028 18:30:35.133623   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.133633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:35.133641   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:35.133699   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:35.172236   67149 cri.go:89] found id: ""
	I1028 18:30:35.172262   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.172272   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:35.172282   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:35.172296   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:35.224952   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:35.224981   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:35.238554   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:35.238578   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:35.318991   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:35.319024   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:35.319040   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:35.399763   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:35.399799   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:33.612446   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.613847   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.402375   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.402653   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.902346   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:38.372413   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.947847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:37.963147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:37.963210   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.001768   67149 cri.go:89] found id: ""
	I1028 18:30:38.001792   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.001802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:38.001809   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:38.001868   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:38.042877   67149 cri.go:89] found id: ""
	I1028 18:30:38.042905   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.042916   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:38.042924   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:38.042986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:38.078116   67149 cri.go:89] found id: ""
	I1028 18:30:38.078143   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.078154   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:38.078162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:38.078226   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:38.111082   67149 cri.go:89] found id: ""
	I1028 18:30:38.111108   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.111119   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:38.111127   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:38.111187   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:38.144863   67149 cri.go:89] found id: ""
	I1028 18:30:38.144889   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.144898   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:38.144906   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:38.144962   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:38.178671   67149 cri.go:89] found id: ""
	I1028 18:30:38.178701   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.178712   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:38.178719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:38.178774   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:38.218441   67149 cri.go:89] found id: ""
	I1028 18:30:38.218464   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.218472   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:38.218477   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:38.218528   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:38.252697   67149 cri.go:89] found id: ""
	I1028 18:30:38.252719   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.252727   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:38.252736   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:38.252745   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:38.304813   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:38.304853   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:38.318437   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:38.318462   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:38.389959   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:38.389987   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:38.390002   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:38.471462   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:38.471495   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:41.013647   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:41.027167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:41.027233   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.113426   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:39.903261   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.402381   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.871193   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.873502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:41.062559   67149 cri.go:89] found id: ""
	I1028 18:30:41.062590   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.062601   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:41.062609   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:41.062667   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:41.097732   67149 cri.go:89] found id: ""
	I1028 18:30:41.097758   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.097767   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:41.097773   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:41.097819   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:41.133067   67149 cri.go:89] found id: ""
	I1028 18:30:41.133089   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.133097   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:41.133102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:41.133150   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:41.168640   67149 cri.go:89] found id: ""
	I1028 18:30:41.168674   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.168684   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:41.168691   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:41.168754   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:41.206429   67149 cri.go:89] found id: ""
	I1028 18:30:41.206453   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.206463   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:41.206470   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:41.206527   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:41.248326   67149 cri.go:89] found id: ""
	I1028 18:30:41.248350   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.248360   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:41.248369   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:41.248429   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:41.283703   67149 cri.go:89] found id: ""
	I1028 18:30:41.283734   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.283746   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:41.283753   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:41.283810   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:41.327759   67149 cri.go:89] found id: ""
	I1028 18:30:41.327786   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.327796   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:41.327807   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:41.327820   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:41.388563   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:41.388593   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:41.406411   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:41.406435   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:41.490605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:41.490626   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:41.490637   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:41.569386   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:41.569433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.109394   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:44.123047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:44.123113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:44.156762   67149 cri.go:89] found id: ""
	I1028 18:30:44.156792   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.156802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:44.156810   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:44.156867   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:44.192244   67149 cri.go:89] found id: ""
	I1028 18:30:44.192271   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.192282   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:44.192289   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:44.192357   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:44.224059   67149 cri.go:89] found id: ""
	I1028 18:30:44.224094   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.224101   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:44.224115   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:44.224168   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:44.258750   67149 cri.go:89] found id: ""
	I1028 18:30:44.258779   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.258789   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:44.258797   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:44.258854   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:44.295600   67149 cri.go:89] found id: ""
	I1028 18:30:44.295624   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.295632   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:44.295638   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:44.295684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:44.327278   67149 cri.go:89] found id: ""
	I1028 18:30:44.327302   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.327309   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:44.327315   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:44.327370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:44.360734   67149 cri.go:89] found id: ""
	I1028 18:30:44.360760   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.360768   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:44.360774   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:44.360822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:44.398198   67149 cri.go:89] found id: ""
	I1028 18:30:44.398224   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.398234   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:44.398249   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:44.398261   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:44.476135   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:44.476167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.514073   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:44.514105   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:44.563001   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:44.563033   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:44.576882   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:44.576912   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:44.648532   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:43.112043   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.113135   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.113382   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:44.403147   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:46.902890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.370854   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.371758   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.373946   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.149133   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:47.165612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:47.165696   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:47.203960   67149 cri.go:89] found id: ""
	I1028 18:30:47.203987   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.203996   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:47.204002   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:47.204065   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:47.236731   67149 cri.go:89] found id: ""
	I1028 18:30:47.236757   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.236766   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:47.236774   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:47.236828   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:47.273779   67149 cri.go:89] found id: ""
	I1028 18:30:47.273808   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.273820   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:47.273826   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:47.273878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:47.309996   67149 cri.go:89] found id: ""
	I1028 18:30:47.310020   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.310028   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:47.310034   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:47.310108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:47.352904   67149 cri.go:89] found id: ""
	I1028 18:30:47.352925   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.352934   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:47.352939   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:47.352990   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:47.389641   67149 cri.go:89] found id: ""
	I1028 18:30:47.389660   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.389667   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:47.389672   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:47.389718   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:47.422591   67149 cri.go:89] found id: ""
	I1028 18:30:47.422622   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.422632   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:47.422639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:47.422694   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:47.454849   67149 cri.go:89] found id: ""
	I1028 18:30:47.454876   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.454886   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:47.454895   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:47.454916   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:47.506176   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:47.506203   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:47.519084   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:47.519108   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:47.585660   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.585681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:47.585696   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:47.664904   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:47.664939   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:50.203775   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:50.216923   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:50.216992   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:50.252506   67149 cri.go:89] found id: ""
	I1028 18:30:50.252531   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.252541   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:50.252548   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:50.252608   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:50.288641   67149 cri.go:89] found id: ""
	I1028 18:30:50.288669   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.288678   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:50.288684   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:50.288739   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:50.322130   67149 cri.go:89] found id: ""
	I1028 18:30:50.322163   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.322174   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:50.322182   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:50.322240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:50.359508   67149 cri.go:89] found id: ""
	I1028 18:30:50.359536   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.359546   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:50.359554   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:50.359617   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:50.393571   67149 cri.go:89] found id: ""
	I1028 18:30:50.393607   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.393618   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:50.393626   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:50.393685   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:50.428683   67149 cri.go:89] found id: ""
	I1028 18:30:50.428705   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.428713   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:50.428719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:50.428767   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:50.464086   67149 cri.go:89] found id: ""
	I1028 18:30:50.464111   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.464119   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:50.464125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:50.464183   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:50.496695   67149 cri.go:89] found id: ""
	I1028 18:30:50.496726   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.496736   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:50.496745   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:50.496755   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:50.545495   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:50.545526   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:50.558819   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:50.558852   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:50.636344   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:50.636369   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:50.636384   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:50.720270   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:50.720304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:49.612927   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.613353   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.402779   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.901517   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.873490   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:54.372373   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.261194   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:53.274451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:53.274507   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:53.306258   67149 cri.go:89] found id: ""
	I1028 18:30:53.306286   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.306295   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:53.306301   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:53.306362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:53.340222   67149 cri.go:89] found id: ""
	I1028 18:30:53.340244   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.340253   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:53.340258   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:53.340322   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:53.377726   67149 cri.go:89] found id: ""
	I1028 18:30:53.377750   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.377760   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:53.377767   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:53.377820   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:53.414228   67149 cri.go:89] found id: ""
	I1028 18:30:53.414252   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.414262   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:53.414275   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:53.414332   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:53.449152   67149 cri.go:89] found id: ""
	I1028 18:30:53.449179   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.449186   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:53.449192   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:53.449237   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:53.485678   67149 cri.go:89] found id: ""
	I1028 18:30:53.485705   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.485716   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:53.485723   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:53.485784   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:53.520764   67149 cri.go:89] found id: ""
	I1028 18:30:53.520791   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.520802   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:53.520810   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:53.520870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:53.561153   67149 cri.go:89] found id: ""
	I1028 18:30:53.561176   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.561184   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:53.561192   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:53.561202   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:53.642192   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:53.642242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.686527   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:53.686567   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:53.740815   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:53.740849   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:53.754577   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:53.754604   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:53.823717   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:54.112985   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.612820   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.903128   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:55.903482   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.372798   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.871814   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.324847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:56.338572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:56.338628   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:56.375482   67149 cri.go:89] found id: ""
	I1028 18:30:56.375506   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.375517   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:56.375524   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:56.375580   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:56.407894   67149 cri.go:89] found id: ""
	I1028 18:30:56.407921   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.407931   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:56.407938   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:56.407993   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:56.447006   67149 cri.go:89] found id: ""
	I1028 18:30:56.447037   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.447048   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:56.447055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:56.447112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:56.483850   67149 cri.go:89] found id: ""
	I1028 18:30:56.483880   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.483890   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:56.483898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:56.483958   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:56.520008   67149 cri.go:89] found id: ""
	I1028 18:30:56.520038   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.520045   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:56.520051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:56.520099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:56.552567   67149 cri.go:89] found id: ""
	I1028 18:30:56.552592   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.552600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:56.552608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:56.552658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:56.591277   67149 cri.go:89] found id: ""
	I1028 18:30:56.591297   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.591305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:56.591311   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:56.591362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:56.632164   67149 cri.go:89] found id: ""
	I1028 18:30:56.632186   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.632194   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:56.632202   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:56.632219   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:56.683590   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:56.683623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:56.698509   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:56.698539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:56.777141   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.777171   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:56.777188   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:56.851801   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:56.851842   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.394266   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:59.408460   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:59.408545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:59.444066   67149 cri.go:89] found id: ""
	I1028 18:30:59.444092   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.444104   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:59.444112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:59.444165   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:59.479531   67149 cri.go:89] found id: ""
	I1028 18:30:59.479557   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.479568   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:59.479576   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:59.479622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:59.519467   67149 cri.go:89] found id: ""
	I1028 18:30:59.519489   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.519496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:59.519502   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:59.519546   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:59.551108   67149 cri.go:89] found id: ""
	I1028 18:30:59.551131   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.551140   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:59.551146   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:59.551197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:59.585875   67149 cri.go:89] found id: ""
	I1028 18:30:59.585899   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.585907   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:59.585912   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:59.585968   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:59.620571   67149 cri.go:89] found id: ""
	I1028 18:30:59.620595   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.620602   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:59.620608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:59.620655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:59.653927   67149 cri.go:89] found id: ""
	I1028 18:30:59.653954   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.653965   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:59.653972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:59.654039   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:59.689138   67149 cri.go:89] found id: ""
	I1028 18:30:59.689160   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.689168   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:59.689175   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:59.689185   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:59.768231   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:59.768270   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.811980   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:59.812007   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:59.864509   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:59.864543   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:59.879329   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:59.879354   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:59.950134   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:59.112280   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:01.113341   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.402845   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.902628   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.904642   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.872873   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:03.371672   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.450237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:02.464689   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:02.464765   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:02.500938   67149 cri.go:89] found id: ""
	I1028 18:31:02.500964   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.500975   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:02.500982   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:02.501043   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:02.534580   67149 cri.go:89] found id: ""
	I1028 18:31:02.534608   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.534620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:02.534628   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:02.534684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:02.570203   67149 cri.go:89] found id: ""
	I1028 18:31:02.570224   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.570231   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:02.570237   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:02.570284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:02.606037   67149 cri.go:89] found id: ""
	I1028 18:31:02.606064   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.606072   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:02.606082   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:02.606135   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:02.640622   67149 cri.go:89] found id: ""
	I1028 18:31:02.640646   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.640656   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:02.640663   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:02.640723   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:02.676406   67149 cri.go:89] found id: ""
	I1028 18:31:02.676434   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.676444   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:02.676451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:02.676520   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:02.710284   67149 cri.go:89] found id: ""
	I1028 18:31:02.710308   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.710316   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:02.710322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:02.710376   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:02.750853   67149 cri.go:89] found id: ""
	I1028 18:31:02.750899   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.750910   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:02.750918   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:02.750929   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:02.825886   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.825913   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:02.825927   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:02.904828   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:02.904857   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:02.941886   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:02.941922   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:02.991603   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:02.991632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.505655   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:05.520582   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:05.520638   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:05.558724   67149 cri.go:89] found id: ""
	I1028 18:31:05.558753   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.558763   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:05.558770   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:05.558816   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:05.597864   67149 cri.go:89] found id: ""
	I1028 18:31:05.597885   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.597893   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:05.597898   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:05.597956   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:05.643571   67149 cri.go:89] found id: ""
	I1028 18:31:05.643602   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.643613   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:05.643620   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:05.643679   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:05.682010   67149 cri.go:89] found id: ""
	I1028 18:31:05.682039   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.682048   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:05.682053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:05.682106   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:05.716043   67149 cri.go:89] found id: ""
	I1028 18:31:05.716067   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.716080   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:05.716086   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:05.716134   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:05.750962   67149 cri.go:89] found id: ""
	I1028 18:31:05.750995   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.751010   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:05.751016   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:05.751078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:05.785059   67149 cri.go:89] found id: ""
	I1028 18:31:05.785111   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.785124   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:05.785132   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:05.785193   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:05.833525   67149 cri.go:89] found id: ""
	I1028 18:31:05.833550   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.833559   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:05.833566   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:05.833579   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:05.887766   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:05.887796   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.902575   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:05.902606   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:05.975082   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:05.975108   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:05.975122   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:03.613265   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.114362   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.402167   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:07.402252   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.873147   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:08.370748   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.050369   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:06.050396   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.593506   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:08.606188   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:08.606251   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:08.645186   67149 cri.go:89] found id: ""
	I1028 18:31:08.645217   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.645227   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:08.645235   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:08.645294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:08.680728   67149 cri.go:89] found id: ""
	I1028 18:31:08.680759   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.680771   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:08.680778   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:08.680833   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:08.714733   67149 cri.go:89] found id: ""
	I1028 18:31:08.714760   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.714772   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:08.714779   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:08.714844   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:08.750293   67149 cri.go:89] found id: ""
	I1028 18:31:08.750323   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.750333   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:08.750339   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:08.750390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:08.784521   67149 cri.go:89] found id: ""
	I1028 18:31:08.784548   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.784559   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:08.784566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:08.784629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:08.818808   67149 cri.go:89] found id: ""
	I1028 18:31:08.818838   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.818849   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:08.818857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:08.818920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:08.855575   67149 cri.go:89] found id: ""
	I1028 18:31:08.855608   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.855619   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:08.855633   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:08.855690   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:08.892996   67149 cri.go:89] found id: ""
	I1028 18:31:08.893024   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.893035   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:08.893045   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:08.893064   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.937056   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:08.937084   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:08.989013   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:08.989048   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:09.002048   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:09.002077   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:09.075247   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:09.075277   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:09.075290   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:08.612396   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.612689   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:09.402595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.903403   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.371335   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:12.371435   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.371502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.654701   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:11.668066   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:11.668146   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:11.701666   67149 cri.go:89] found id: ""
	I1028 18:31:11.701693   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.701703   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:11.701710   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:11.701769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:11.738342   67149 cri.go:89] found id: ""
	I1028 18:31:11.738368   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.738376   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:11.738381   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:11.738428   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:11.772009   67149 cri.go:89] found id: ""
	I1028 18:31:11.772035   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.772045   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:11.772053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:11.772118   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:11.816210   67149 cri.go:89] found id: ""
	I1028 18:31:11.816237   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.816245   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:11.816251   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:11.816314   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:11.856675   67149 cri.go:89] found id: ""
	I1028 18:31:11.856704   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.856714   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:11.856722   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:11.856785   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:11.896566   67149 cri.go:89] found id: ""
	I1028 18:31:11.896592   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.896600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:11.896606   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:11.896665   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:11.932599   67149 cri.go:89] found id: ""
	I1028 18:31:11.932624   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.932633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:11.932640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:11.932704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:11.966952   67149 cri.go:89] found id: ""
	I1028 18:31:11.966982   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.967008   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:11.967019   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:11.967037   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:12.016465   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:12.016502   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:12.029314   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:12.029343   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:12.098906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:12.098936   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:12.098954   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:12.176440   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:12.176489   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:14.720173   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:14.733796   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:14.733848   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:14.774072   67149 cri.go:89] found id: ""
	I1028 18:31:14.774093   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.774100   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:14.774106   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:14.774152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:14.816116   67149 cri.go:89] found id: ""
	I1028 18:31:14.816145   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.816158   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:14.816166   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:14.816224   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:14.851167   67149 cri.go:89] found id: ""
	I1028 18:31:14.851189   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.851196   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:14.851202   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:14.851247   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:14.885887   67149 cri.go:89] found id: ""
	I1028 18:31:14.885918   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.885926   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:14.885931   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:14.885976   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:14.923787   67149 cri.go:89] found id: ""
	I1028 18:31:14.923815   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.923826   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:14.923833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:14.923892   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:14.960117   67149 cri.go:89] found id: ""
	I1028 18:31:14.960148   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.960160   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:14.960167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:14.960240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:14.998418   67149 cri.go:89] found id: ""
	I1028 18:31:14.998458   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.998470   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:14.998485   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:14.998545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:15.031985   67149 cri.go:89] found id: ""
	I1028 18:31:15.032005   67149 logs.go:282] 0 containers: []
	W1028 18:31:15.032014   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:15.032027   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:15.032038   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:15.045239   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:15.045264   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:15.118954   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:15.118978   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:15.118994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:15.200538   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:15.200569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:15.243581   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:15.243603   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:13.112232   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:15.113498   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.612946   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.401769   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.402729   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.871916   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.872378   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.794670   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:17.808325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:17.808380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:17.841888   67149 cri.go:89] found id: ""
	I1028 18:31:17.841911   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.841919   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:17.841925   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:17.841979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:17.881241   67149 cri.go:89] found id: ""
	I1028 18:31:17.881261   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.881269   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:17.881274   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:17.881331   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:17.922394   67149 cri.go:89] found id: ""
	I1028 18:31:17.922419   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.922428   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:17.922434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:17.922498   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:17.963519   67149 cri.go:89] found id: ""
	I1028 18:31:17.963546   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.963558   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:17.963566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:17.963641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:18.003181   67149 cri.go:89] found id: ""
	I1028 18:31:18.003202   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.003209   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:18.003214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:18.003261   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:18.040305   67149 cri.go:89] found id: ""
	I1028 18:31:18.040338   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.040348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:18.040356   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:18.040413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:18.077671   67149 cri.go:89] found id: ""
	I1028 18:31:18.077696   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.077708   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:18.077715   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:18.077777   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:18.116155   67149 cri.go:89] found id: ""
	I1028 18:31:18.116176   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.116182   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:18.116190   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:18.116201   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:18.168343   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:18.168370   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:18.181962   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:18.181988   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:18.260227   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:18.260251   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:18.260265   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:18.346588   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:18.346620   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:20.885832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:20.899053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:20.899121   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:20.954770   67149 cri.go:89] found id: ""
	I1028 18:31:20.954797   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.954806   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:20.954812   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:20.954870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:20.989809   67149 cri.go:89] found id: ""
	I1028 18:31:20.989834   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.989842   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:20.989848   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:20.989900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:21.027150   67149 cri.go:89] found id: ""
	I1028 18:31:21.027179   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.027191   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:21.027199   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:21.027259   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:20.113283   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:22.612710   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.902738   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.403607   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.371574   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.871000   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.061235   67149 cri.go:89] found id: ""
	I1028 18:31:21.061260   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.061270   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:21.061277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:21.061337   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:21.095451   67149 cri.go:89] found id: ""
	I1028 18:31:21.095473   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.095481   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:21.095487   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:21.095540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:21.135576   67149 cri.go:89] found id: ""
	I1028 18:31:21.135598   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.135606   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:21.135612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:21.135660   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:21.170816   67149 cri.go:89] found id: ""
	I1028 18:31:21.170845   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.170854   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:21.170860   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:21.170920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:21.204616   67149 cri.go:89] found id: ""
	I1028 18:31:21.204649   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.204660   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:21.204672   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:21.204686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:21.254523   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:21.254556   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:21.267981   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:21.268005   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:21.336786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:21.336813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:21.336828   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:21.420596   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:21.420625   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:23.962346   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:23.976628   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:23.976697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:24.016418   67149 cri.go:89] found id: ""
	I1028 18:31:24.016444   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.016453   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:24.016458   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:24.016533   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:24.051448   67149 cri.go:89] found id: ""
	I1028 18:31:24.051474   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.051483   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:24.051488   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:24.051554   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:24.090787   67149 cri.go:89] found id: ""
	I1028 18:31:24.090816   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.090829   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:24.090836   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:24.090900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:24.126315   67149 cri.go:89] found id: ""
	I1028 18:31:24.126342   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.126349   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:24.126355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:24.126402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:24.161340   67149 cri.go:89] found id: ""
	I1028 18:31:24.161367   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.161379   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:24.161387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:24.161445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:24.195991   67149 cri.go:89] found id: ""
	I1028 18:31:24.196017   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.196028   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:24.196036   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:24.196084   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:24.229789   67149 cri.go:89] found id: ""
	I1028 18:31:24.229822   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.229837   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:24.229845   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:24.229906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:24.264724   67149 cri.go:89] found id: ""
	I1028 18:31:24.264748   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.264757   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:24.264765   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:24.264775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:24.303551   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:24.303574   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:24.351693   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:24.351725   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:24.364537   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:24.364566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:24.436935   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:24.436955   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:24.436966   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:25.112870   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.612492   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.902008   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.902544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.902622   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.871089   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.871265   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:29.872201   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.014928   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:27.029540   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:27.029609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:27.064598   67149 cri.go:89] found id: ""
	I1028 18:31:27.064626   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.064636   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:27.064643   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:27.064704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:27.099432   67149 cri.go:89] found id: ""
	I1028 18:31:27.099455   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.099465   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:27.099472   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:27.099531   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:27.133961   67149 cri.go:89] found id: ""
	I1028 18:31:27.133996   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.134006   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:27.134012   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:27.134075   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:27.171976   67149 cri.go:89] found id: ""
	I1028 18:31:27.172003   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.172014   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:27.172022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:27.172092   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:27.205681   67149 cri.go:89] found id: ""
	I1028 18:31:27.205710   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.205721   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:27.205730   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:27.205793   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:27.244571   67149 cri.go:89] found id: ""
	I1028 18:31:27.244603   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.244612   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:27.244617   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:27.244661   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:27.281692   67149 cri.go:89] found id: ""
	I1028 18:31:27.281722   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.281738   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:27.281746   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:27.281800   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:27.335003   67149 cri.go:89] found id: ""
	I1028 18:31:27.335033   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.335041   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:27.335049   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:27.335066   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:27.353992   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:27.354017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:27.457103   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:27.457125   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:27.457136   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.537717   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:27.537746   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:27.579842   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:27.579870   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.133749   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:30.147518   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:30.147576   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:30.182683   67149 cri.go:89] found id: ""
	I1028 18:31:30.182711   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.182722   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:30.182729   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:30.182792   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:30.215088   67149 cri.go:89] found id: ""
	I1028 18:31:30.215109   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.215118   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:30.215124   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:30.215176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:30.250169   67149 cri.go:89] found id: ""
	I1028 18:31:30.250194   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.250202   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:30.250207   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:30.250284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:30.286028   67149 cri.go:89] found id: ""
	I1028 18:31:30.286055   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.286062   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:30.286069   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:30.286112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:30.320503   67149 cri.go:89] found id: ""
	I1028 18:31:30.320528   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.320539   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:30.320547   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:30.320604   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:30.352773   67149 cri.go:89] found id: ""
	I1028 18:31:30.352793   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.352800   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:30.352806   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:30.352859   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:30.385922   67149 cri.go:89] found id: ""
	I1028 18:31:30.385944   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.385951   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:30.385956   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:30.385999   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:30.421909   67149 cri.go:89] found id: ""
	I1028 18:31:30.421933   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.421945   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:30.421956   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:30.421971   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.470917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:30.470944   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:30.484033   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:30.484059   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:30.554810   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:30.554836   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:30.554850   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:30.634403   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:30.634432   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:30.113496   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.613397   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:30.402688   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.902277   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:31.872598   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:34.371198   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:33.182127   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:33.194994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:33.195063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:33.233076   67149 cri.go:89] found id: ""
	I1028 18:31:33.233098   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.233106   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:33.233112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:33.233160   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:33.266963   67149 cri.go:89] found id: ""
	I1028 18:31:33.266998   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.267021   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:33.267028   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:33.267083   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:33.305888   67149 cri.go:89] found id: ""
	I1028 18:31:33.305914   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.305922   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:33.305928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:33.305979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:33.339451   67149 cri.go:89] found id: ""
	I1028 18:31:33.339479   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.339489   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:33.339496   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:33.339555   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:33.375038   67149 cri.go:89] found id: ""
	I1028 18:31:33.375065   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.375073   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:33.375079   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:33.375125   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:33.409157   67149 cri.go:89] found id: ""
	I1028 18:31:33.409176   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.409183   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:33.409189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:33.409243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:33.449108   67149 cri.go:89] found id: ""
	I1028 18:31:33.449133   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.449149   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:33.449155   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:33.449227   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:33.491194   67149 cri.go:89] found id: ""
	I1028 18:31:33.491215   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.491224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:33.491232   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:33.491250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.530590   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:33.530618   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:33.581933   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:33.581962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:33.595387   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:33.595416   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:33.664855   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:33.664882   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:33.664899   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:35.113673   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.612606   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:35.401938   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.402270   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.372499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:38.372670   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.242724   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:36.256152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:36.256221   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:36.292452   67149 cri.go:89] found id: ""
	I1028 18:31:36.292494   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.292504   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:36.292511   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:36.292568   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:36.325210   67149 cri.go:89] found id: ""
	I1028 18:31:36.325231   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.325238   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:36.325244   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:36.325293   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:36.356738   67149 cri.go:89] found id: ""
	I1028 18:31:36.356757   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.356764   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:36.356769   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:36.356827   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:36.389678   67149 cri.go:89] found id: ""
	I1028 18:31:36.389704   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.389712   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:36.389717   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:36.389775   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:36.422956   67149 cri.go:89] found id: ""
	I1028 18:31:36.422989   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.422998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:36.423005   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:36.423061   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:36.456877   67149 cri.go:89] found id: ""
	I1028 18:31:36.456904   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.456914   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:36.456921   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:36.456983   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:36.489728   67149 cri.go:89] found id: ""
	I1028 18:31:36.489758   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.489766   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:36.489772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:36.489829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:36.524307   67149 cri.go:89] found id: ""
	I1028 18:31:36.524338   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.524350   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:36.524360   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:36.524372   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:36.574771   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:36.574800   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:36.587485   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:36.587506   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:36.655922   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:36.655949   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:36.655962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.738312   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:36.738352   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.279425   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:39.293108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:39.293167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:39.325542   67149 cri.go:89] found id: ""
	I1028 18:31:39.325573   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.325584   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:39.325592   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:39.325656   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:39.357581   67149 cri.go:89] found id: ""
	I1028 18:31:39.357609   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.357620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:39.357627   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:39.357681   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:39.394833   67149 cri.go:89] found id: ""
	I1028 18:31:39.394853   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.394860   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:39.394866   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:39.394916   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:39.430151   67149 cri.go:89] found id: ""
	I1028 18:31:39.430178   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.430188   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:39.430196   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:39.430254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:39.468060   67149 cri.go:89] found id: ""
	I1028 18:31:39.468089   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.468100   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:39.468108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:39.468181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:39.503702   67149 cri.go:89] found id: ""
	I1028 18:31:39.503734   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.503752   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:39.503761   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:39.503829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:39.536193   67149 cri.go:89] found id: ""
	I1028 18:31:39.536221   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.536233   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:39.536240   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:39.536305   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:39.570194   67149 cri.go:89] found id: ""
	I1028 18:31:39.570215   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.570224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:39.570232   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:39.570245   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:39.647179   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:39.647207   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:39.647220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:39.725980   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:39.726012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.765671   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:39.765704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:39.818257   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:39.818289   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:39.614055   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.112561   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:39.902061   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.402314   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:40.871483   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.872270   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.332335   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:42.344964   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:42.345031   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:42.380904   67149 cri.go:89] found id: ""
	I1028 18:31:42.380926   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.380933   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:42.380938   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:42.380982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:42.414361   67149 cri.go:89] found id: ""
	I1028 18:31:42.414385   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.414393   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:42.414399   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:42.414443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:42.447931   67149 cri.go:89] found id: ""
	I1028 18:31:42.447957   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.447968   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:42.447975   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:42.448024   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:42.483262   67149 cri.go:89] found id: ""
	I1028 18:31:42.483283   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.483296   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:42.483301   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:42.483365   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:42.516665   67149 cri.go:89] found id: ""
	I1028 18:31:42.516693   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.516702   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:42.516709   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:42.516776   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:42.550160   67149 cri.go:89] found id: ""
	I1028 18:31:42.550181   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.550188   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:42.550193   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:42.550238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:42.583509   67149 cri.go:89] found id: ""
	I1028 18:31:42.583535   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.583546   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:42.583552   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:42.583611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:42.619276   67149 cri.go:89] found id: ""
	I1028 18:31:42.619312   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.619320   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:42.619328   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:42.619338   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:42.692442   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:42.692487   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:42.731768   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:42.731798   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:42.783997   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:42.784043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.797809   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:42.797834   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:42.863351   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
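The cycle above keeps repeating because the retry loop is polling for control-plane containers that never appear: every kubectl attempt against localhost:8443 is refused, and every crictl query returns no container IDs, not even exited ones. A minimal shell sketch of the same per-component check, reusing only the crictl invocation already shown in the log (the component list simply mirrors the names polled above):

    # List matching containers, running or exited, for each expected
    # control-plane component; an empty result means none was ever created.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done

Uniformly empty output from a check like this, together with the repeated "connection to the server localhost:8443 was refused" from kubectl, suggests the v1.20.0 apiserver container was never created on this node rather than started and crashed.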
	I1028 18:31:45.363648   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:45.376277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:45.376341   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:45.415231   67149 cri.go:89] found id: ""
	I1028 18:31:45.415255   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.415265   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:45.415273   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:45.415330   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:45.451133   67149 cri.go:89] found id: ""
	I1028 18:31:45.451157   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.451164   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:45.451170   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:45.451228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:45.483526   67149 cri.go:89] found id: ""
	I1028 18:31:45.483552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.483562   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:45.483567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:45.483621   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:45.515799   67149 cri.go:89] found id: ""
	I1028 18:31:45.515828   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.515838   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:45.515846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:45.515906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:45.548043   67149 cri.go:89] found id: ""
	I1028 18:31:45.548071   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.548082   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:45.548090   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:45.548153   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:45.581525   67149 cri.go:89] found id: ""
	I1028 18:31:45.581552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.581563   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:45.581570   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:45.581629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:45.622258   67149 cri.go:89] found id: ""
	I1028 18:31:45.622282   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.622290   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:45.622296   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:45.622353   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:45.661255   67149 cri.go:89] found id: ""
	I1028 18:31:45.661275   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.661284   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:45.661292   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:45.661304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:45.675209   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:45.675242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:45.737546   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.737573   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:45.737592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:45.816012   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:45.816043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:45.854135   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:45.854167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:44.612155   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.612875   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:44.402557   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.902339   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:45.371918   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:47.872710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.875644   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:48.406233   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:48.418950   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:48.419001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:48.452933   67149 cri.go:89] found id: ""
	I1028 18:31:48.452952   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.452961   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:48.452975   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:48.453034   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:48.489604   67149 cri.go:89] found id: ""
	I1028 18:31:48.489630   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.489640   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:48.489648   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:48.489706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:48.525463   67149 cri.go:89] found id: ""
	I1028 18:31:48.525493   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.525504   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:48.525511   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:48.525566   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:48.559266   67149 cri.go:89] found id: ""
	I1028 18:31:48.559294   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.559302   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:48.559308   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:48.559363   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:48.592670   67149 cri.go:89] found id: ""
	I1028 18:31:48.592695   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.592706   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:48.592714   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:48.592769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:48.627175   67149 cri.go:89] found id: ""
	I1028 18:31:48.627196   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.627205   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:48.627213   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:48.627260   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:48.661864   67149 cri.go:89] found id: ""
	I1028 18:31:48.661887   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.661895   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:48.661901   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:48.661946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:48.696731   67149 cri.go:89] found id: ""
	I1028 18:31:48.696756   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.696765   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:48.696775   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:48.696788   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.745390   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:48.745417   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:48.759218   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:48.759241   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:48.830299   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:48.830331   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:48.830349   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:48.909934   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:48.909963   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:49.112884   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.613217   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.402707   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.903103   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:52.373283   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.872603   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.451597   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:51.464889   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:51.464943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:51.499962   67149 cri.go:89] found id: ""
	I1028 18:31:51.499990   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.500001   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:51.500010   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:51.500069   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:51.532341   67149 cri.go:89] found id: ""
	I1028 18:31:51.532370   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.532380   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:51.532388   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:51.532443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:51.565531   67149 cri.go:89] found id: ""
	I1028 18:31:51.565554   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.565561   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:51.565567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:51.565614   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:51.602859   67149 cri.go:89] found id: ""
	I1028 18:31:51.602882   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.602894   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:51.602899   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:51.602943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:51.639896   67149 cri.go:89] found id: ""
	I1028 18:31:51.639915   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.639922   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:51.639928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:51.639972   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:51.675728   67149 cri.go:89] found id: ""
	I1028 18:31:51.675755   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.675762   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:51.675768   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:51.675825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:51.710285   67149 cri.go:89] found id: ""
	I1028 18:31:51.710312   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.710320   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:51.710326   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:51.710374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:51.744527   67149 cri.go:89] found id: ""
	I1028 18:31:51.744551   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.744560   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:51.744570   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:51.744584   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.780580   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:51.780614   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:51.832979   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:51.833008   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:51.846389   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:51.846415   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:51.918177   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:51.918196   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:51.918210   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.493806   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:54.506468   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:54.506526   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:54.540500   67149 cri.go:89] found id: ""
	I1028 18:31:54.540527   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.540537   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:54.540544   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:54.540601   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:54.573399   67149 cri.go:89] found id: ""
	I1028 18:31:54.573428   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.573438   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:54.573448   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:54.573509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:54.606227   67149 cri.go:89] found id: ""
	I1028 18:31:54.606262   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.606272   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:54.606278   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:54.606338   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:54.641143   67149 cri.go:89] found id: ""
	I1028 18:31:54.641163   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.641172   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:54.641179   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:54.641238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:54.674269   67149 cri.go:89] found id: ""
	I1028 18:31:54.674292   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.674300   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:54.674306   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:54.674352   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:54.707160   67149 cri.go:89] found id: ""
	I1028 18:31:54.707183   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.707191   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:54.707197   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:54.707242   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:54.746522   67149 cri.go:89] found id: ""
	I1028 18:31:54.746544   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.746552   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:54.746558   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:54.746613   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:54.779315   67149 cri.go:89] found id: ""
	I1028 18:31:54.779341   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.779348   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:54.779356   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:54.779367   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:54.830987   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:54.831017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:54.844846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:54.844871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:54.913540   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:54.913558   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:54.913568   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.994220   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:54.994250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:54.112785   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.114029   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.401657   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.402726   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.371756   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:59.372308   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.532820   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:57.545394   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:57.545454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:57.582329   67149 cri.go:89] found id: ""
	I1028 18:31:57.582355   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.582365   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:57.582372   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:57.582438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:57.616082   67149 cri.go:89] found id: ""
	I1028 18:31:57.616107   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.616115   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:57.616123   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:57.616167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:57.650118   67149 cri.go:89] found id: ""
	I1028 18:31:57.650144   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.650153   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:57.650162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:57.650215   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:57.684801   67149 cri.go:89] found id: ""
	I1028 18:31:57.684823   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.684831   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:57.684839   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:57.684887   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:57.722396   67149 cri.go:89] found id: ""
	I1028 18:31:57.722423   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.722431   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:57.722437   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:57.722516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:57.759779   67149 cri.go:89] found id: ""
	I1028 18:31:57.759802   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.759809   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:57.759818   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:57.759861   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:57.793977   67149 cri.go:89] found id: ""
	I1028 18:31:57.794034   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.794045   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:57.794053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:57.794117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:57.831104   67149 cri.go:89] found id: ""
	I1028 18:31:57.831130   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.831140   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:57.831151   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:57.831164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:57.920155   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:57.920174   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:57.920184   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:57.999677   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:57.999709   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:58.036647   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:58.036673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:58.088299   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:58.088333   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.601832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:00.615434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:00.615491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:00.653344   67149 cri.go:89] found id: ""
	I1028 18:32:00.653372   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.653383   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:00.653390   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:00.653450   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:00.693086   67149 cri.go:89] found id: ""
	I1028 18:32:00.693111   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.693122   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:00.693130   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:00.693188   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:00.728129   67149 cri.go:89] found id: ""
	I1028 18:32:00.728157   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.728167   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:00.728181   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:00.728243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:00.760540   67149 cri.go:89] found id: ""
	I1028 18:32:00.760568   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.760579   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:00.760586   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:00.760654   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:00.796633   67149 cri.go:89] found id: ""
	I1028 18:32:00.796662   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.796672   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:00.796680   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:00.796740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:00.829924   67149 cri.go:89] found id: ""
	I1028 18:32:00.829954   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.829966   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:00.829974   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:00.830028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:00.861565   67149 cri.go:89] found id: ""
	I1028 18:32:00.861586   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.861593   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:00.861599   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:00.861655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:00.894129   67149 cri.go:89] found id: ""
	I1028 18:32:00.894154   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.894162   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:00.894169   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:00.894180   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.908303   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:00.908331   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:00.974521   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:00.974543   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:00.974557   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:58.612554   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.612655   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:58.901908   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.902851   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.872423   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.873235   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.048113   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:01.048140   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:01.086657   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:01.086731   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.639781   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:03.652239   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:03.652291   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:03.687098   67149 cri.go:89] found id: ""
	I1028 18:32:03.687120   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.687129   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:03.687135   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:03.687181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:03.722176   67149 cri.go:89] found id: ""
	I1028 18:32:03.722206   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.722217   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:03.722225   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:03.722282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:03.757489   67149 cri.go:89] found id: ""
	I1028 18:32:03.757512   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.757520   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:03.757526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:03.757571   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:03.795359   67149 cri.go:89] found id: ""
	I1028 18:32:03.795400   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.795411   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:03.795429   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:03.795489   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:03.830919   67149 cri.go:89] found id: ""
	I1028 18:32:03.830945   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.830953   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:03.830958   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:03.831008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:03.863396   67149 cri.go:89] found id: ""
	I1028 18:32:03.863425   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.863437   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:03.863445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:03.863516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:03.897085   67149 cri.go:89] found id: ""
	I1028 18:32:03.897112   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.897121   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:03.897128   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:03.897189   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:03.929439   67149 cri.go:89] found id: ""
	I1028 18:32:03.929467   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.929478   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:03.929487   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:03.929503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.982917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:03.982943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:03.996333   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:03.996355   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:04.062786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:04.062813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:04.062827   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:04.143988   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:04.144016   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:03.113499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.612544   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.620294   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.402246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.402730   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.904429   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.373120   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:08.871662   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.683977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:06.696605   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:06.696680   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:06.733031   67149 cri.go:89] found id: ""
	I1028 18:32:06.733060   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.733070   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:06.733078   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:06.733138   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:06.769196   67149 cri.go:89] found id: ""
	I1028 18:32:06.769218   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.769225   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:06.769231   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:06.769280   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:06.806938   67149 cri.go:89] found id: ""
	I1028 18:32:06.806959   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.806966   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:06.806972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:06.807017   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:06.839506   67149 cri.go:89] found id: ""
	I1028 18:32:06.839528   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.839537   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:06.839542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:06.839587   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:06.878275   67149 cri.go:89] found id: ""
	I1028 18:32:06.878300   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.878309   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:06.878317   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:06.878382   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:06.916336   67149 cri.go:89] found id: ""
	I1028 18:32:06.916366   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.916374   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:06.916381   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:06.916434   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:06.971413   67149 cri.go:89] found id: ""
	I1028 18:32:06.971435   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.971443   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:06.971449   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:06.971494   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:07.004432   67149 cri.go:89] found id: ""
	I1028 18:32:07.004464   67149 logs.go:282] 0 containers: []
	W1028 18:32:07.004485   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:07.004496   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:07.004509   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:07.081741   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:07.081780   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:07.122022   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:07.122053   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:07.169470   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:07.169496   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:07.183433   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:07.183459   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:07.251765   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:09.752773   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:09.766042   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:09.766119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:09.802881   67149 cri.go:89] found id: ""
	I1028 18:32:09.802911   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.802923   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:09.802930   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:09.802987   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:09.840269   67149 cri.go:89] found id: ""
	I1028 18:32:09.840292   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.840300   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:09.840305   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:09.840370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:09.874654   67149 cri.go:89] found id: ""
	I1028 18:32:09.874679   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.874689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:09.874696   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:09.874752   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:09.910328   67149 cri.go:89] found id: ""
	I1028 18:32:09.910350   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.910358   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:09.910365   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:09.910425   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:09.942717   67149 cri.go:89] found id: ""
	I1028 18:32:09.942744   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.942752   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:09.942757   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:09.942814   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:09.975644   67149 cri.go:89] found id: ""
	I1028 18:32:09.975674   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.975685   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:09.975692   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:09.975750   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:10.008257   67149 cri.go:89] found id: ""
	I1028 18:32:10.008294   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.008305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:10.008313   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:10.008373   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:10.041678   67149 cri.go:89] found id: ""
	I1028 18:32:10.041705   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.041716   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:10.041726   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:10.041739   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:10.090474   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:10.090503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:10.103846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:10.103874   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:10.172819   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:10.172847   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:10.172862   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:10.251927   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:10.251955   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:10.112553   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.113090   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:10.401890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.902888   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:11.371860   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:13.373112   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.795985   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:12.810859   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:12.810921   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:12.849897   67149 cri.go:89] found id: ""
	I1028 18:32:12.849925   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.849934   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:12.849940   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:12.850003   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:12.883007   67149 cri.go:89] found id: ""
	I1028 18:32:12.883034   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.883045   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:12.883052   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:12.883111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:12.917458   67149 cri.go:89] found id: ""
	I1028 18:32:12.917485   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.917496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:12.917503   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:12.917561   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:12.950531   67149 cri.go:89] found id: ""
	I1028 18:32:12.950558   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.950568   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:12.950576   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:12.950631   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:12.983902   67149 cri.go:89] found id: ""
	I1028 18:32:12.983929   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.983937   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:12.983943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:12.983986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:13.017486   67149 cri.go:89] found id: ""
	I1028 18:32:13.017513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.017521   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:13.017526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:13.017582   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:13.050553   67149 cri.go:89] found id: ""
	I1028 18:32:13.050582   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.050594   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:13.050601   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:13.050658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:13.083489   67149 cri.go:89] found id: ""
	I1028 18:32:13.083513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.083520   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:13.083528   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:13.083537   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:13.137451   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:13.137482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:13.153154   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:13.153179   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:13.221043   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:13.221066   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:13.221080   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:13.299930   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:13.299960   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:15.850484   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:15.862930   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:15.862982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:15.895625   67149 cri.go:89] found id: ""
	I1028 18:32:15.895643   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.895651   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:15.895657   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:15.895701   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:15.928073   67149 cri.go:89] found id: ""
	I1028 18:32:15.928103   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.928113   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:15.928120   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:15.928180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:15.962261   67149 cri.go:89] found id: ""
	I1028 18:32:15.962282   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.962290   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:15.962295   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:15.962342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:15.999177   67149 cri.go:89] found id: ""
	I1028 18:32:15.999206   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.999216   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:15.999224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:15.999282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:16.033098   67149 cri.go:89] found id: ""
	I1028 18:32:16.033126   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.033138   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:16.033145   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:16.033208   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:14.612739   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.112266   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.401576   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.401773   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:18.372059   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:16.067049   67149 cri.go:89] found id: ""
	I1028 18:32:16.067071   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.067083   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:16.067089   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:16.067145   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:16.106936   67149 cri.go:89] found id: ""
	I1028 18:32:16.106970   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.106981   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:16.106988   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:16.107044   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:16.141702   67149 cri.go:89] found id: ""
	I1028 18:32:16.141729   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.141741   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:16.141751   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:16.141762   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:16.178772   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:16.178803   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:16.230851   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:16.230878   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:16.244489   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:16.244514   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:16.319362   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:16.319389   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:16.319405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:18.899694   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:18.913287   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:18.913358   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:18.954136   67149 cri.go:89] found id: ""
	I1028 18:32:18.954158   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.954165   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:18.954170   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:18.954218   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:18.987427   67149 cri.go:89] found id: ""
	I1028 18:32:18.987449   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.987457   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:18.987462   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:18.987505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:19.022067   67149 cri.go:89] found id: ""
	I1028 18:32:19.022099   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.022110   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:19.022118   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:19.022167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:19.054533   67149 cri.go:89] found id: ""
	I1028 18:32:19.054560   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.054570   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:19.054578   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:19.054644   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:19.099324   67149 cri.go:89] found id: ""
	I1028 18:32:19.099356   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.099367   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:19.099375   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:19.099436   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:19.146437   67149 cri.go:89] found id: ""
	I1028 18:32:19.146463   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.146470   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:19.146478   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:19.146540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:19.192027   67149 cri.go:89] found id: ""
	I1028 18:32:19.192053   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.192070   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:19.192078   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:19.192140   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:19.228411   67149 cri.go:89] found id: ""
	I1028 18:32:19.228437   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.228447   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:19.228457   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:19.228480   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:19.313151   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:19.313183   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:19.352117   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:19.352142   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:19.402772   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:19.402805   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:19.416148   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:19.416167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:19.483098   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:19.112720   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.611924   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:19.403635   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.902116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:20.872280   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:22.872726   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.983420   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:21.997129   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:21.997180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:22.035600   67149 cri.go:89] found id: ""
	I1028 18:32:22.035622   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.035631   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:22.035637   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:22.035684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:22.073413   67149 cri.go:89] found id: ""
	I1028 18:32:22.073440   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.073450   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:22.073458   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:22.073505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:22.108637   67149 cri.go:89] found id: ""
	I1028 18:32:22.108663   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.108673   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:22.108682   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:22.108740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:22.145837   67149 cri.go:89] found id: ""
	I1028 18:32:22.145860   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.145867   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:22.145873   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:22.145928   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:22.183830   67149 cri.go:89] found id: ""
	I1028 18:32:22.183855   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.183864   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:22.183869   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:22.183917   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:22.221402   67149 cri.go:89] found id: ""
	I1028 18:32:22.221423   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.221430   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:22.221436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:22.221484   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:22.262193   67149 cri.go:89] found id: ""
	I1028 18:32:22.262220   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.262229   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:22.262234   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:22.262297   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:22.298774   67149 cri.go:89] found id: ""
	I1028 18:32:22.298797   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.298808   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:22.298819   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:22.298831   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:22.348677   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:22.348716   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:22.362199   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:22.362220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:22.429304   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:22.429327   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:22.429345   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:22.511591   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:22.511623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
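	The "container status" step uses a runtime-agnostic fallback rather than assuming crictl is present: it resolves crictl with which and, if the CRI listing fails, falls back to the Docker CLI. Pulled out of the log as a standalone sketch:

	    # prefer crictl; if it is missing or the CRI listing fails, fall back to docker
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a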
	I1028 18:32:25.049119   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:25.063910   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:25.063970   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:25.099795   67149 cri.go:89] found id: ""
	I1028 18:32:25.099822   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.099833   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:25.099840   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:25.099898   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:25.137957   67149 cri.go:89] found id: ""
	I1028 18:32:25.137985   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.137995   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:25.138002   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:25.138063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:25.174687   67149 cri.go:89] found id: ""
	I1028 18:32:25.174715   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.174726   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:25.174733   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:25.174795   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:25.207039   67149 cri.go:89] found id: ""
	I1028 18:32:25.207067   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.207077   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:25.207084   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:25.207130   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:25.239961   67149 cri.go:89] found id: ""
	I1028 18:32:25.239990   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.239998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:25.240004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:25.240055   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:25.273823   67149 cri.go:89] found id: ""
	I1028 18:32:25.273848   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.273858   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:25.273865   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:25.273925   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:25.310725   67149 cri.go:89] found id: ""
	I1028 18:32:25.310754   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.310765   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:25.310772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:25.310830   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:25.348724   67149 cri.go:89] found id: ""
	I1028 18:32:25.348749   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.348760   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:25.348770   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:25.348784   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:25.430213   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:25.430243   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.472233   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:25.472263   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:25.525648   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:25.525676   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:25.538697   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:25.538721   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:25.606779   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:23.612901   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.112494   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:23.902733   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.402271   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:25.372428   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:27.870461   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:29.871824   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.107877   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:28.122241   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:28.122296   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:28.157042   67149 cri.go:89] found id: ""
	I1028 18:32:28.157070   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.157082   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:28.157089   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:28.157142   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:28.190625   67149 cri.go:89] found id: ""
	I1028 18:32:28.190648   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.190658   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:28.190666   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:28.190724   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:28.224528   67149 cri.go:89] found id: ""
	I1028 18:32:28.224551   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.224559   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:28.224565   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:28.224609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:28.265073   67149 cri.go:89] found id: ""
	I1028 18:32:28.265100   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.265110   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:28.265116   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:28.265174   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:28.302598   67149 cri.go:89] found id: ""
	I1028 18:32:28.302623   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.302633   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:28.302640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:28.302697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:28.339757   67149 cri.go:89] found id: ""
	I1028 18:32:28.339781   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.339789   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:28.339794   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:28.339846   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:28.375185   67149 cri.go:89] found id: ""
	I1028 18:32:28.375213   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.375224   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:28.375231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:28.375294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:28.413292   67149 cri.go:89] found id: ""
	I1028 18:32:28.413316   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.413334   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:28.413344   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:28.413376   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:28.464069   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:28.464098   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:28.478275   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:28.478299   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:28.546483   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.546504   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:28.546515   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:28.623015   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:28.623041   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:28.613303   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.111518   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.403789   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:30.903113   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:32.371951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:34.372820   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.161570   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:31.175056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:31.175119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:31.210163   67149 cri.go:89] found id: ""
	I1028 18:32:31.210187   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.210199   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:31.210207   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:31.210264   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:31.244605   67149 cri.go:89] found id: ""
	I1028 18:32:31.244630   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.244637   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:31.244643   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:31.244688   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:31.280793   67149 cri.go:89] found id: ""
	I1028 18:32:31.280818   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.280827   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:31.280833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:31.280890   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:31.314616   67149 cri.go:89] found id: ""
	I1028 18:32:31.314641   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.314649   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:31.314654   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:31.314709   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:31.349386   67149 cri.go:89] found id: ""
	I1028 18:32:31.349410   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.349417   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:31.349423   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:31.349469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:31.382831   67149 cri.go:89] found id: ""
	I1028 18:32:31.382861   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.382871   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:31.382879   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:31.382924   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:31.417365   67149 cri.go:89] found id: ""
	I1028 18:32:31.417391   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.417400   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:31.417410   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:31.417469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:31.450631   67149 cri.go:89] found id: ""
	I1028 18:32:31.450660   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.450672   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:31.450683   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:31.450697   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.488932   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:31.488959   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:31.539335   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:31.539361   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:31.552304   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:31.552328   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:31.629291   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:31.629308   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:31.629323   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.207517   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:34.221231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:34.221310   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:34.255342   67149 cri.go:89] found id: ""
	I1028 18:32:34.255365   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.255373   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:34.255379   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:34.255438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:34.303802   67149 cri.go:89] found id: ""
	I1028 18:32:34.303827   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.303836   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:34.303843   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:34.303896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:34.339531   67149 cri.go:89] found id: ""
	I1028 18:32:34.339568   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.339579   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:34.339589   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:34.339653   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:34.374063   67149 cri.go:89] found id: ""
	I1028 18:32:34.374084   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.374094   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:34.374102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:34.374155   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:34.410880   67149 cri.go:89] found id: ""
	I1028 18:32:34.410909   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.410918   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:34.410924   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:34.410971   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:34.445372   67149 cri.go:89] found id: ""
	I1028 18:32:34.445397   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.445408   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:34.445416   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:34.445474   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:34.477820   67149 cri.go:89] found id: ""
	I1028 18:32:34.477844   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.477851   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:34.477857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:34.477909   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:34.517581   67149 cri.go:89] found id: ""
	I1028 18:32:34.517602   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.517609   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:34.517618   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:34.517632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:34.530407   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:34.530430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:34.599055   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:34.599083   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:34.599096   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.681579   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:34.681612   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:34.720523   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:34.720550   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:33.111858   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.112216   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.613521   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:33.401782   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.402544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.901848   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:36.871451   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.372642   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.272697   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:37.289091   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:37.289159   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:37.321600   67149 cri.go:89] found id: ""
	I1028 18:32:37.321628   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.321639   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:37.321647   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:37.321704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:37.353296   67149 cri.go:89] found id: ""
	I1028 18:32:37.353324   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.353337   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:37.353343   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:37.353400   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:37.386299   67149 cri.go:89] found id: ""
	I1028 18:32:37.386321   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.386328   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:37.386333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:37.386401   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:37.420992   67149 cri.go:89] found id: ""
	I1028 18:32:37.421026   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.421039   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:37.421047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:37.421117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:37.456174   67149 cri.go:89] found id: ""
	I1028 18:32:37.456206   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.456217   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:37.456224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:37.456284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:37.491796   67149 cri.go:89] found id: ""
	I1028 18:32:37.491819   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.491827   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:37.491833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:37.491878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:37.529002   67149 cri.go:89] found id: ""
	I1028 18:32:37.529028   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.529039   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:37.529047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:37.529111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:37.568967   67149 cri.go:89] found id: ""
	I1028 18:32:37.568993   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.569001   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:37.569010   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:37.569022   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:37.640041   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:37.640065   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:37.640076   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:37.725490   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:37.725524   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:37.771858   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:37.771879   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.821240   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:37.821271   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.334946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:40.349147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:40.349216   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:40.383931   67149 cri.go:89] found id: ""
	I1028 18:32:40.383956   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.383966   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:40.383973   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:40.384028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:40.419877   67149 cri.go:89] found id: ""
	I1028 18:32:40.419905   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.419915   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:40.419922   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:40.419978   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:40.453659   67149 cri.go:89] found id: ""
	I1028 18:32:40.453681   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.453689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:40.453695   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:40.453744   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:40.486299   67149 cri.go:89] found id: ""
	I1028 18:32:40.486326   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.486343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:40.486350   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:40.486407   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:40.518309   67149 cri.go:89] found id: ""
	I1028 18:32:40.518334   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.518344   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:40.518351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:40.518402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:40.549008   67149 cri.go:89] found id: ""
	I1028 18:32:40.549040   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.549049   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:40.549055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:40.549108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:40.586157   67149 cri.go:89] found id: ""
	I1028 18:32:40.586177   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.586184   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:40.586189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:40.586232   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:40.621107   67149 cri.go:89] found id: ""
	I1028 18:32:40.621133   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.621144   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:40.621153   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:40.621164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.633793   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:40.633816   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:40.700370   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:40.700393   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:40.700405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:40.780964   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:40.780993   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:40.819904   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:40.819928   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:40.112755   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:42.113116   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.903476   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.904639   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.872360   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.371399   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:43.371487   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:43.384387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:43.384445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:43.419889   67149 cri.go:89] found id: ""
	I1028 18:32:43.419922   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.419931   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:43.419937   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:43.419997   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:43.455177   67149 cri.go:89] found id: ""
	I1028 18:32:43.455209   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.455219   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:43.455227   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:43.455295   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:43.493070   67149 cri.go:89] found id: ""
	I1028 18:32:43.493094   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.493104   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:43.493111   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:43.493170   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:43.526164   67149 cri.go:89] found id: ""
	I1028 18:32:43.526191   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.526199   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:43.526205   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:43.526254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:43.559225   67149 cri.go:89] found id: ""
	I1028 18:32:43.559252   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.559263   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:43.559270   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:43.559323   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:43.597178   67149 cri.go:89] found id: ""
	I1028 18:32:43.597198   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.597206   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:43.597212   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:43.597276   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:43.633179   67149 cri.go:89] found id: ""
	I1028 18:32:43.633200   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.633209   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:43.633214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:43.633290   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:43.669567   67149 cri.go:89] found id: ""
	I1028 18:32:43.669596   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.669605   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:43.669615   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:43.669631   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:43.737618   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:43.737638   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:43.737650   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:43.821394   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:43.821425   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:43.859924   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:43.859950   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:43.913539   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:43.913566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:44.611539   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.613781   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.401399   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.401930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.371445   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.372075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.429021   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:46.443137   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:46.443197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:46.480363   67149 cri.go:89] found id: ""
	I1028 18:32:46.480385   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.480394   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:46.480400   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:46.480452   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:46.514702   67149 cri.go:89] found id: ""
	I1028 18:32:46.514731   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.514738   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:46.514744   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:46.514796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:46.546829   67149 cri.go:89] found id: ""
	I1028 18:32:46.546857   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.546868   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:46.546874   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:46.546920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:46.580372   67149 cri.go:89] found id: ""
	I1028 18:32:46.580398   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.580407   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:46.580415   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:46.580491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:46.615455   67149 cri.go:89] found id: ""
	I1028 18:32:46.615479   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.615489   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:46.615497   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:46.615556   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:46.649547   67149 cri.go:89] found id: ""
	I1028 18:32:46.649570   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.649577   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:46.649583   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:46.649641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:46.684744   67149 cri.go:89] found id: ""
	I1028 18:32:46.684768   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.684779   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:46.684787   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:46.684852   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:46.725530   67149 cri.go:89] found id: ""
	I1028 18:32:46.725558   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.725569   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:46.725578   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:46.725592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:46.794487   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:46.794506   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:46.794517   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:46.881407   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:46.881438   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:46.921649   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:46.921671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:46.972915   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:46.972947   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.486835   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:49.501445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:49.501509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:49.537356   67149 cri.go:89] found id: ""
	I1028 18:32:49.537377   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.537384   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:49.537389   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:49.537443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:49.568514   67149 cri.go:89] found id: ""
	I1028 18:32:49.568541   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.568549   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:49.568555   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:49.568610   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:49.602300   67149 cri.go:89] found id: ""
	I1028 18:32:49.602324   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.602333   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:49.602342   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:49.602390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:49.640326   67149 cri.go:89] found id: ""
	I1028 18:32:49.640356   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.640366   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:49.640376   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:49.640437   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:49.675145   67149 cri.go:89] found id: ""
	I1028 18:32:49.675175   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.675183   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:49.675189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:49.675235   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:49.711104   67149 cri.go:89] found id: ""
	I1028 18:32:49.711129   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.711139   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:49.711147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:49.711206   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:49.748316   67149 cri.go:89] found id: ""
	I1028 18:32:49.748366   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.748378   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:49.748385   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:49.748441   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:49.781620   67149 cri.go:89] found id: ""
	I1028 18:32:49.781646   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.781656   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:49.781665   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:49.781679   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.795119   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:49.795143   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:49.870438   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:49.870519   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:49.870539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:49.956845   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:49.956875   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:49.993067   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:49.993097   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:49.112102   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:51.612691   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.901950   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.902354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.903627   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.871412   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.871499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:54.874588   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
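	The interleaved pod_ready lines come from three other StartStop clusters (PIDs 66600, 66801 and 67489) polling their metrics-server pods; "Ready":"False" means the pod's Ready condition has not turned True within the wait loop. Roughly the same check expressed directly with kubectl, as a sketch (the context placeholder and the k8s-app=metrics-server label selector are assumptions, not taken from this log):

	    # print the Ready condition of the metrics-server pod in kube-system
	    kubectl --context <cluster> -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'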
	I1028 18:32:52.543260   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:52.556524   67149 kubeadm.go:597] duration metric: took 4m2.404527005s to restartPrimaryControlPlane
	W1028 18:32:52.556602   67149 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:52.556639   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:32:53.011065   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:32:53.026226   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:32:53.035868   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:32:53.045257   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:32:53.045271   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:32:53.045302   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:32:53.054383   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:32:53.054430   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:32:53.063665   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:32:53.073006   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:32:53.073054   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:32:53.083156   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.092700   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:32:53.092742   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.102374   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:32:53.112072   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:32:53.112121   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:32:53.122102   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:32:53.347625   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:32:53.613118   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:56.111841   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:55.402354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.902406   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.371909   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:59.872630   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.112962   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:00.613499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.896006   66801 pod_ready.go:82] duration metric: took 4m0.00005957s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	E1028 18:32:58.896033   66801 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:32:58.896052   66801 pod_ready.go:39] duration metric: took 4m13.055181811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:32:58.896092   66801 kubeadm.go:597] duration metric: took 4m21.540757653s to restartPrimaryControlPlane
	W1028 18:32:58.896147   66801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:58.896173   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:02.372443   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:04.871981   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:03.113038   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:05.114488   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:07.612365   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:06.872705   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.371018   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.612856   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:12.114228   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:11.371831   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:13.372636   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:14.613213   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.113328   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:15.871907   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.872203   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:19.612892   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:21.613052   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:20.370964   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:22.371880   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:24.372718   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:25.039296   66801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.14309835s)
	I1028 18:33:25.039378   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:25.056172   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:25.066775   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:25.077717   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:25.077734   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:25.077770   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:33:25.086924   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:25.086968   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:25.096867   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:33:25.106162   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:25.106205   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:25.117015   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.126191   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:25.126245   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.135691   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:33:25.144827   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:25.144867   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:25.153834   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:25.201789   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:33:25.201866   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:33:25.306568   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:33:25.306717   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:33:25.306845   66801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:33:25.314339   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:33:25.316173   66801 out.go:235]   - Generating certificates and keys ...
	I1028 18:33:25.316271   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:33:25.316345   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:33:25.316463   66801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:33:25.316571   66801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:33:25.316688   66801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:33:25.316768   66801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:33:25.316857   66801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:33:25.316943   66801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:33:25.317047   66801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:33:25.317149   66801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:33:25.317209   66801 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:33:25.317299   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:33:25.643056   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:33:25.723345   66801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:33:25.831628   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:33:25.908255   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:33:26.215149   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:33:26.215654   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:33:26.218291   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:33:24.111834   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.113295   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.220065   66801 out.go:235]   - Booting up control plane ...
	I1028 18:33:26.220170   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:33:26.220251   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:33:26.220336   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:33:26.239633   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:33:26.245543   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:33:26.245612   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:33:26.378154   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:33:26.378332   66801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:33:26.879957   66801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.937575ms
	I1028 18:33:26.880090   66801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:33:26.365771   67489 pod_ready.go:82] duration metric: took 4m0.000286415s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:26.365796   67489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:26.365812   67489 pod_ready.go:39] duration metric: took 4m12.539631154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:26.365837   67489 kubeadm.go:597] duration metric: took 4m19.835720994s to restartPrimaryControlPlane
	W1028 18:33:26.365884   67489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:26.365910   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:31.882091   66801 kubeadm.go:310] [api-check] The API server is healthy after 5.002114527s
	I1028 18:33:31.897915   66801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:33:31.914311   66801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:33:31.943604   66801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:33:31.943859   66801 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-051152 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:33:31.954350   66801 kubeadm.go:310] [bootstrap-token] Using token: h7eyzq.87sgylc03ke6zhfy
	I1028 18:33:28.613480   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.113034   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.955444   66801 out.go:235]   - Configuring RBAC rules ...
	I1028 18:33:31.955591   66801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:33:31.960749   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:33:31.967695   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:33:31.970863   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:33:31.973924   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:33:31.979191   66801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:33:32.291512   66801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:33:32.714999   66801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:33:33.291889   66801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:33:33.293069   66801 kubeadm.go:310] 
	I1028 18:33:33.293167   66801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:33:33.293182   66801 kubeadm.go:310] 
	I1028 18:33:33.293255   66801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:33:33.293268   66801 kubeadm.go:310] 
	I1028 18:33:33.293307   66801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:33:33.293372   66801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:33:33.293435   66801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:33:33.293447   66801 kubeadm.go:310] 
	I1028 18:33:33.293518   66801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:33:33.293526   66801 kubeadm.go:310] 
	I1028 18:33:33.293595   66801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:33:33.293624   66801 kubeadm.go:310] 
	I1028 18:33:33.293712   66801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:33:33.293842   66801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:33:33.293946   66801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:33:33.293960   66801 kubeadm.go:310] 
	I1028 18:33:33.294117   66801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:33:33.294196   66801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:33:33.294203   66801 kubeadm.go:310] 
	I1028 18:33:33.294276   66801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294385   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:33:33.294414   66801 kubeadm.go:310] 	--control-plane 
	I1028 18:33:33.294427   66801 kubeadm.go:310] 
	I1028 18:33:33.294515   66801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:33:33.294525   66801 kubeadm.go:310] 
	I1028 18:33:33.294629   66801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294774   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:33:33.295715   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:33:33.295839   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:33:33.295852   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:33:33.297447   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:33:33.298607   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:33:33.311113   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:33:33.329576   66801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:33:33.329634   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:33.329680   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-051152 minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=no-preload-051152 minikube.k8s.io/primary=true
	I1028 18:33:33.355186   66801 ops.go:34] apiserver oom_adj: -16
	I1028 18:33:33.509281   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.009672   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.509515   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.010084   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.509359   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.009689   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.509671   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.009884   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.510004   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.615853   66801 kubeadm.go:1113] duration metric: took 4.286272328s to wait for elevateKubeSystemPrivileges
	I1028 18:33:37.615890   66801 kubeadm.go:394] duration metric: took 5m0.313982235s to StartCluster
	I1028 18:33:37.615913   66801 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.616000   66801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:33:37.618418   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.618741   66801 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:33:37.618857   66801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:33:37.618951   66801 addons.go:69] Setting storage-provisioner=true in profile "no-preload-051152"
	I1028 18:33:37.618963   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:33:37.618975   66801 addons.go:69] Setting default-storageclass=true in profile "no-preload-051152"
	I1028 18:33:37.619001   66801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-051152"
	I1028 18:33:37.618973   66801 addons.go:234] Setting addon storage-provisioner=true in "no-preload-051152"
	W1028 18:33:37.619019   66801 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:33:37.619012   66801 addons.go:69] Setting metrics-server=true in profile "no-preload-051152"
	I1028 18:33:37.619043   66801 addons.go:234] Setting addon metrics-server=true in "no-preload-051152"
	I1028 18:33:37.619047   66801 host.go:66] Checking if "no-preload-051152" exists ...
	W1028 18:33:37.619056   66801 addons.go:243] addon metrics-server should already be in state true
	I1028 18:33:37.619097   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.619417   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619446   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619472   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619488   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619487   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619521   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.620738   66801 out.go:177] * Verifying Kubernetes components...
	I1028 18:33:37.622165   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:33:37.636006   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1028 18:33:37.636285   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I1028 18:33:37.636536   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.636621   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.637055   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637082   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637344   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637368   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637419   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637634   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637811   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.638112   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.638157   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.638738   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1028 18:33:37.639176   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.639609   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.639632   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.639918   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.640333   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.640375   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.641571   66801 addons.go:234] Setting addon default-storageclass=true in "no-preload-051152"
	W1028 18:33:37.641592   66801 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:33:37.641620   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.641947   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.641981   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.657758   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I1028 18:33:37.657834   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I1028 18:33:37.657942   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I1028 18:33:37.658187   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658335   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658739   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658752   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658877   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658896   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658931   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.659309   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659358   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659409   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.659428   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.659552   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.659934   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.659964   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.660163   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.660406   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.661568   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.662429   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.663435   66801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:33:37.664414   66801 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:33:33.613699   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:36.111831   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:37.665306   66801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.665324   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:33:37.665343   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.666055   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:33:37.666073   66801 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:33:37.666092   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.668918   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669385   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669519   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.669543   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669754   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.669942   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.670093   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.670266   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.670513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.670556   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.670719   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.670851   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.671014   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.671115   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.677419   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I1028 18:33:37.677828   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.678184   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.678201   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.678476   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.678686   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.680177   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.680403   66801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.680420   66801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:33:37.680437   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.683981   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.684534   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.685007   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.685153   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.685307   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.832104   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:33:37.859406   66801 node_ready.go:35] waiting up to 6m0s for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873437   66801 node_ready.go:49] node "no-preload-051152" has status "Ready":"True"
	I1028 18:33:37.873460   66801 node_ready.go:38] duration metric: took 14.023686ms for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873470   66801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:37.888286   66801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:37.917341   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:33:37.917363   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:33:37.948690   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:33:37.948716   66801 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:33:37.967948   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.971737   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.998758   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:37.998782   66801 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:33:38.034907   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:38.924695   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924720   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.924762   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924828   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925048   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925079   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925093   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925105   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925128   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925131   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925142   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925153   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925154   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925164   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925372   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925397   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925382   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926852   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926857   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.926872   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.955462   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.955492   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.955858   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.955938   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.955953   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373144   66801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.338192413s)
	I1028 18:33:39.373209   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373224   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373512   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373529   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373537   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373544   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373761   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373775   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373785   66801 addons.go:475] Verifying addon metrics-server=true in "no-preload-051152"
	I1028 18:33:39.375584   66801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:33:38.113078   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:40.612141   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.612763   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:39.377031   66801 addons.go:510] duration metric: took 1.758176418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:33:39.906691   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.396083   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:44.894264   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:46.396937   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.397023   66801 pod_ready.go:82] duration metric: took 8.508709164s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.397048   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402560   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.402579   66801 pod_ready.go:82] duration metric: took 5.5155ms for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402588   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406630   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.406646   66801 pod_ready.go:82] duration metric: took 4.052513ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406654   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411238   66801 pod_ready.go:93] pod "kube-proxy-28qht" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.411253   66801 pod_ready.go:82] duration metric: took 4.592983ms for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411260   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414867   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.414880   66801 pod_ready.go:82] duration metric: took 3.615132ms for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414886   66801 pod_ready.go:39] duration metric: took 8.541406133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:46.414900   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:33:46.414943   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:33:46.430889   66801 api_server.go:72] duration metric: took 8.81211088s to wait for apiserver process to appear ...
	I1028 18:33:46.430907   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:33:46.430925   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:33:46.435248   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:33:46.435963   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:33:46.435978   66801 api_server.go:131] duration metric: took 5.065719ms to wait for apiserver health ...
	I1028 18:33:46.435984   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:33:46.596186   66801 system_pods.go:59] 9 kube-system pods found
	I1028 18:33:46.596222   66801 system_pods.go:61] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.596230   66801 system_pods.go:61] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.596234   66801 system_pods.go:61] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.596238   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.596242   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.596246   66801 system_pods.go:61] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.596252   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.596301   66801 system_pods.go:61] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.596317   66801 system_pods.go:61] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.596324   66801 system_pods.go:74] duration metric: took 160.335823ms to wait for pod list to return data ...
	I1028 18:33:46.596341   66801 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:33:46.793115   66801 default_sa.go:45] found service account: "default"
	I1028 18:33:46.793147   66801 default_sa.go:55] duration metric: took 196.795286ms for default service account to be created ...
	I1028 18:33:46.793157   66801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:33:46.995868   66801 system_pods.go:86] 9 kube-system pods found
	I1028 18:33:46.995899   66801 system_pods.go:89] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.995905   66801 system_pods.go:89] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.995909   66801 system_pods.go:89] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.995912   66801 system_pods.go:89] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.995917   66801 system_pods.go:89] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.995920   66801 system_pods.go:89] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.995924   66801 system_pods.go:89] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.995929   66801 system_pods.go:89] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.995934   66801 system_pods.go:89] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.995941   66801 system_pods.go:126] duration metric: took 202.778451ms to wait for k8s-apps to be running ...
	I1028 18:33:46.995946   66801 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:33:46.995990   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:47.011260   66801 system_svc.go:56] duration metric: took 15.302599ms WaitForService to wait for kubelet
	I1028 18:33:47.011285   66801 kubeadm.go:582] duration metric: took 9.392510785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:33:47.011303   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:33:47.193217   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:33:47.193239   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:33:47.193250   66801 node_conditions.go:105] duration metric: took 181.942948ms to run NodePressure ...
	I1028 18:33:47.193261   66801 start.go:241] waiting for startup goroutines ...
	I1028 18:33:47.193267   66801 start.go:246] waiting for cluster config update ...
	I1028 18:33:47.193278   66801 start.go:255] writing updated cluster config ...
	I1028 18:33:47.193529   66801 ssh_runner.go:195] Run: rm -f paused
	I1028 18:33:47.240247   66801 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:33:47.242139   66801 out.go:177] * Done! kubectl is now configured to use "no-preload-051152" cluster and "default" namespace by default
	I1028 18:33:45.112037   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:47.112764   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:48.107354   66600 pod_ready.go:82] duration metric: took 4m0.001062902s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:48.107377   66600 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:48.107395   66600 pod_ready.go:39] duration metric: took 4m13.535788316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:48.107420   66600 kubeadm.go:597] duration metric: took 4m22.316644235s to restartPrimaryControlPlane
	W1028 18:33:48.107467   66600 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:48.107490   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:52.667497   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.301566887s)
	I1028 18:33:52.667559   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:52.683580   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:52.695334   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:52.705505   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:52.705524   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:52.705569   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:33:52.714922   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:52.714969   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:52.724156   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:33:52.733125   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:52.733161   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:52.742369   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.751021   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:52.751065   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.760543   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:33:52.770939   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:52.770985   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
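The sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected API server URL (https://control-plane.minikube.internal:8444) and removed when it does not reference it, since the preceding kubeadm reset already deleted the originals. A rough shell equivalent of that check, as a sketch only (run on the node; not the exact code path minikube uses):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done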
	I1028 18:33:52.781890   67489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:52.961562   67489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:01.798408   67489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:01.798470   67489 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:01.798580   67489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:01.798724   67489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:01.798811   67489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:01.798882   67489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:01.800228   67489 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:01.800320   67489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:01.800392   67489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:01.800486   67489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:01.800580   67489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:01.800641   67489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:01.800694   67489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:01.800764   67489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:01.800842   67489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:01.800955   67489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:01.801019   67489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:01.801053   67489 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:01.801102   67489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:01.801145   67489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:01.801196   67489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:01.801252   67489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:01.801316   67489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:01.801409   67489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:01.801513   67489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:01.801605   67489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:01.802967   67489 out.go:235]   - Booting up control plane ...
	I1028 18:34:01.803061   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:01.803169   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:01.803254   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:01.803376   67489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:01.803488   67489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:01.803558   67489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:01.803685   67489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:01.803800   67489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:01.803869   67489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.148945ms
	I1028 18:34:01.803933   67489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:01.803986   67489 kubeadm.go:310] [api-check] The API server is healthy after 5.003798359s
	I1028 18:34:01.804081   67489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:01.804187   67489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:01.804240   67489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:01.804438   67489 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-692033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:01.804533   67489 kubeadm.go:310] [bootstrap-token] Using token: wy8zqj.38m6tcr6hp7sgzod
	I1028 18:34:01.805760   67489 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:01.805856   67489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:01.805949   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:01.806108   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:01.806233   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:01.806378   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:01.806464   67489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:01.806579   67489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:01.806633   67489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:01.806673   67489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:01.806679   67489 kubeadm.go:310] 
	I1028 18:34:01.806735   67489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:01.806746   67489 kubeadm.go:310] 
	I1028 18:34:01.806836   67489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:01.806844   67489 kubeadm.go:310] 
	I1028 18:34:01.806880   67489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:01.806957   67489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:01.807001   67489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:01.807007   67489 kubeadm.go:310] 
	I1028 18:34:01.807060   67489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:01.807071   67489 kubeadm.go:310] 
	I1028 18:34:01.807112   67489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:01.807118   67489 kubeadm.go:310] 
	I1028 18:34:01.807171   67489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:01.807246   67489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:01.807307   67489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:01.807313   67489 kubeadm.go:310] 
	I1028 18:34:01.807387   67489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:01.807454   67489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:01.807465   67489 kubeadm.go:310] 
	I1028 18:34:01.807538   67489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807634   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:01.807655   67489 kubeadm.go:310] 	--control-plane 
	I1028 18:34:01.807661   67489 kubeadm.go:310] 
	I1028 18:34:01.807730   67489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:01.807739   67489 kubeadm.go:310] 
	I1028 18:34:01.807810   67489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807913   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:01.807923   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:34:01.807929   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:01.809168   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:01.810293   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:01.822030   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
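The 496-byte file written here is the bridge CNI configuration minikube generates for the "kvm2" driver + "crio" runtime combination. To inspect the installed file itself (illustrative command, assuming the default-k8s-diff-port-692033 VM is still running):

    minikube ssh -p default-k8s-diff-port-692033 "sudo cat /etc/cni/net.d/1-k8s.conflist"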
	I1028 18:34:01.842831   67489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:01.842908   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:01.842963   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-692033 minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=default-k8s-diff-port-692033 minikube.k8s.io/primary=true
	I1028 18:34:01.875265   67489 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:02.050422   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:02.550824   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.050477   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.551245   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.051177   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.550572   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.051071   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.550926   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.638447   67489 kubeadm.go:1113] duration metric: took 3.795598924s to wait for elevateKubeSystemPrivileges
	I1028 18:34:05.638483   67489 kubeadm.go:394] duration metric: took 4m59.162037455s to StartCluster
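The repeated "kubectl get sa default" calls above are minikube polling until the "default" ServiceAccount exists, which is what the elevateKubeSystemPrivileges step waits for after creating the minikube-rbac clusterrolebinding. A minimal sketch of that wait loop, assuming the same binary and kubeconfig paths shown in the log:

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done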
	I1028 18:34:05.638504   67489 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.638591   67489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:05.641196   67489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.641497   67489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:05.641626   67489 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:05.641720   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:05.641730   67489 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641748   67489 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641760   67489 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:05.641776   67489 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641781   67489 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641792   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.641794   67489 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641803   67489 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:05.641804   67489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-692033"
	I1028 18:34:05.641832   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.642210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642217   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642229   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642245   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642255   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642314   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642905   67489 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:05.644361   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:05.658478   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I1028 18:34:05.658586   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1028 18:34:05.659040   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659044   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659524   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659546   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659701   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659724   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659879   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660044   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660111   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.660610   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.660648   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.661748   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1028 18:34:05.662150   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.662607   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.662627   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.662983   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.662991   67489 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.663006   67489 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:05.663029   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.663294   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663334   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.663531   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663572   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.675955   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1028 18:34:05.676345   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.676784   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.676802   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.677154   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.677358   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.678723   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I1028 18:34:05.678897   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1028 18:34:05.679025   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.679243   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679337   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679700   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679715   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.679805   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679823   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.680500   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680506   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680706   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.680834   67489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:05.681042   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.681070   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.681982   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:05.682005   67489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:05.682035   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.682363   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.683806   67489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:05.684992   67489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.685011   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:05.685029   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.686903   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.686957   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.686973   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.687218   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.687429   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.687693   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.687850   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.688516   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.688908   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.688933   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.689193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.689372   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.689513   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.689655   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.696743   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I1028 18:34:05.697029   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.697432   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.697458   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.697697   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.697843   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.699192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.699397   67489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.699405   67489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:05.699416   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.702897   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.703368   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703483   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.703667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.703841   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.703996   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.838049   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:05.857829   67489 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866141   67489 node_ready.go:49] node "default-k8s-diff-port-692033" has status "Ready":"True"
	I1028 18:34:05.866158   67489 node_ready.go:38] duration metric: took 8.296617ms for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866167   67489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:05.873027   67489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:05.927585   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:05.927608   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:05.928743   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.946390   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.961712   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:05.961734   67489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:05.993688   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:05.993711   67489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:06.097871   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:06.696189   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696226   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696195   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696300   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696696   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696713   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696697   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696721   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696735   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696742   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696750   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696722   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696794   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696984   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697000   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.697027   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697036   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.720324   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.720346   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.720649   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.720668   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262166   67489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.164245646s)
	I1028 18:34:07.262256   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262277   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262587   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262608   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262607   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262616   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262625   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262890   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262923   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262936   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262948   67489 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-692033"
	I1028 18:34:07.264414   67489 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:07.265449   67489 addons.go:510] duration metric: took 1.623834435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
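With storage-provisioner, default-storageclass and metrics-server enabled, the addon objects can be inspected directly; an illustrative check (not part of the recorded run) against the metrics-server deployment whose pod is still reported Pending above:

    kubectl --context default-k8s-diff-port-692033 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-692033 -n kube-system describe pod metrics-server-6867b74b74-8vz62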
	I1028 18:34:07.882264   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.313629   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.206119005s)
	I1028 18:34:14.313702   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:14.329212   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:34:14.339407   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:14.349645   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:14.349669   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:14.349716   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:14.359332   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:14.359384   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:14.369627   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:14.381040   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:14.381098   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:14.390359   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.399743   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:14.399783   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.408932   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:14.417840   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:14.417876   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:14.427234   66600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:14.472502   66600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:14.472593   66600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:14.578311   66600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:14.578456   66600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:14.578576   66600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:14.586748   66600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:10.380304   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:12.878632   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.878951   67489 pod_ready.go:93] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:14.878974   67489 pod_ready.go:82] duration metric: took 9.005915421s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:14.878983   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385215   67489 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.385239   67489 pod_ready.go:82] duration metric: took 506.249352ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385250   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390412   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.390435   67489 pod_ready.go:82] duration metric: took 5.177559ms for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390448   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395252   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.395272   67489 pod_ready.go:82] duration metric: took 4.816812ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395281   67489 pod_ready.go:39] duration metric: took 9.52910413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:15.395298   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:15.395349   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:15.413693   67489 api_server.go:72] duration metric: took 9.772160727s to wait for apiserver process to appear ...
	I1028 18:34:15.413715   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:15.413734   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:34:15.417780   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:34:15.418688   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:15.418712   67489 api_server.go:131] duration metric: took 4.989226ms to wait for apiserver health ...
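The healthz probe above goes straight to the API server on this profile's non-default port 8444. The same endpoint that returned 200/ok can also be hit manually, for example (certificate verification skipped with -k, since the cluster CA is not in the local trust store):

    curl -k https://192.168.39.215:8444/healthz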
	I1028 18:34:15.418720   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:15.424285   67489 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:15.424306   67489 system_pods.go:61] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.424310   67489 system_pods.go:61] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.424315   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.424318   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.424323   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.424327   67489 system_pods.go:61] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.424331   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.424337   67489 system_pods.go:61] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.424344   67489 system_pods.go:61] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.424351   67489 system_pods.go:74] duration metric: took 5.625205ms to wait for pod list to return data ...
	I1028 18:34:15.424359   67489 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:15.427132   67489 default_sa.go:45] found service account: "default"
	I1028 18:34:15.427153   67489 default_sa.go:55] duration metric: took 2.788005ms for default service account to be created ...
	I1028 18:34:15.427161   67489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:15.479404   67489 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:15.479427   67489 system_pods.go:89] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.479433   67489 system_pods.go:89] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.479436   67489 system_pods.go:89] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.479443   67489 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.479448   67489 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.479453   67489 system_pods.go:89] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.479460   67489 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.479472   67489 system_pods.go:89] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.479477   67489 system_pods.go:89] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.479491   67489 system_pods.go:126] duration metric: took 52.324012ms to wait for k8s-apps to be running ...
	I1028 18:34:15.479502   67489 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:15.479548   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:15.493743   67489 system_svc.go:56] duration metric: took 14.233947ms WaitForService to wait for kubelet
	I1028 18:34:15.493772   67489 kubeadm.go:582] duration metric: took 9.852243286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:15.493796   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:15.677127   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:15.677149   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:15.677156   67489 node_conditions.go:105] duration metric: took 183.355591ms to run NodePressure ...
	I1028 18:34:15.677167   67489 start.go:241] waiting for startup goroutines ...
	I1028 18:34:15.677174   67489 start.go:246] waiting for cluster config update ...
	I1028 18:34:15.677183   67489 start.go:255] writing updated cluster config ...
	I1028 18:34:15.677419   67489 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:15.731157   67489 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:15.732912   67489 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-692033" cluster and "default" namespace by default
	I1028 18:34:14.588528   66600 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:14.588660   66600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:14.588749   66600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:14.588886   66600 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:14.588985   66600 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:14.589089   66600 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:14.589179   66600 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:14.589268   66600 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:14.589362   66600 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:14.589472   66600 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:14.589575   66600 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:14.589638   66600 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:14.589739   66600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:14.902456   66600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:15.107236   66600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:15.198073   66600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:15.618175   66600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:15.804761   66600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:15.805675   66600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:15.809860   66600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:15.811538   66600 out.go:235]   - Booting up control plane ...
	I1028 18:34:15.811658   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:15.811761   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:15.812969   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:15.838182   66600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:15.846044   66600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:15.846126   66600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:15.981748   66600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:15.981899   66600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:16.483112   66600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262752ms
	I1028 18:34:16.483242   66600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:21.484655   66600 kubeadm.go:310] [api-check] The API server is healthy after 5.001327308s
	I1028 18:34:21.498067   66600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:21.508713   66600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:21.537520   66600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:21.537724   66600 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-021370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:21.551416   66600 kubeadm.go:310] [bootstrap-token] Using token: c2otm2.eh2uwearn2r38epe
	I1028 18:34:21.552613   66600 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:21.552721   66600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:21.556871   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:21.563570   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:21.566336   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:21.569226   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:21.575090   66600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:21.890874   66600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:22.315363   66600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:22.892050   66600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:22.892097   66600 kubeadm.go:310] 
	I1028 18:34:22.892198   66600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:22.892214   66600 kubeadm.go:310] 
	I1028 18:34:22.892297   66600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:22.892308   66600 kubeadm.go:310] 
	I1028 18:34:22.892346   66600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:22.892457   66600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:22.892549   66600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:22.892559   66600 kubeadm.go:310] 
	I1028 18:34:22.892628   66600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:22.892643   66600 kubeadm.go:310] 
	I1028 18:34:22.892705   66600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:22.892715   66600 kubeadm.go:310] 
	I1028 18:34:22.892784   66600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:22.892851   66600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:22.892958   66600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:22.892981   66600 kubeadm.go:310] 
	I1028 18:34:22.893093   66600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:22.893197   66600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:22.893212   66600 kubeadm.go:310] 
	I1028 18:34:22.893320   66600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893460   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:22.893506   66600 kubeadm.go:310] 	--control-plane 
	I1028 18:34:22.893515   66600 kubeadm.go:310] 
	I1028 18:34:22.893622   66600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:22.893631   66600 kubeadm.go:310] 
	I1028 18:34:22.893728   66600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893886   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:22.894813   66600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:22.895022   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:34:22.895037   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:22.897376   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:22.898532   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:22.909363   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:34:22.930151   66600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:22.930190   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:22.930280   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-021370 minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=embed-certs-021370 minikube.k8s.io/primary=true
	I1028 18:34:22.963249   66600 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:23.216574   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:23.717592   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.217674   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.717602   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.216832   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.717673   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.217668   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.716727   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.217476   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.343171   66600 kubeadm.go:1113] duration metric: took 4.413029537s to wait for elevateKubeSystemPrivileges
	I1028 18:34:27.343201   66600 kubeadm.go:394] duration metric: took 5m1.603783417s to StartCluster
	I1028 18:34:27.343221   66600 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.343302   66600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:27.344913   66600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.345149   66600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:27.345210   66600 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:27.345282   66600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-021370"
	I1028 18:34:27.345297   66600 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-021370"
	W1028 18:34:27.345304   66600 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:27.345310   66600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-021370"
	I1028 18:34:27.345339   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345337   66600 addons.go:69] Setting metrics-server=true in profile "embed-certs-021370"
	I1028 18:34:27.345353   66600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-021370"
	I1028 18:34:27.345360   66600 addons.go:234] Setting addon metrics-server=true in "embed-certs-021370"
	W1028 18:34:27.345369   66600 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:27.345381   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:27.345396   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345742   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345788   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345794   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345798   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.346770   66600 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:27.348169   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:27.361310   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1028 18:34:27.361763   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362073   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 18:34:27.362257   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.362292   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.362550   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362640   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363049   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.363079   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.363204   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.363242   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.363425   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363610   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.363934   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1028 18:34:27.364390   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.364865   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.364885   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.365229   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.365805   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.365852   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.367292   66600 addons.go:234] Setting addon default-storageclass=true in "embed-certs-021370"
	W1028 18:34:27.367314   66600 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:27.367347   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.367738   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.367782   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.381375   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1028 18:34:27.381846   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.382429   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.382441   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.382787   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.382926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.382965   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I1028 18:34:27.383568   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.384121   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.384134   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.384530   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.384730   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.384815   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386107   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I1028 18:34:27.386306   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386435   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.386888   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.386911   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.386977   66600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:27.387284   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.387866   66600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:27.387883   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.388259   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.388628   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:27.388645   66600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:27.388658   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.390614   66600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.390634   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:27.390650   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.393252   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393734   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.393758   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.394122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.394238   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.394364   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.394640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395084   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.395110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.395383   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.395540   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.395677   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.406551   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1028 18:34:27.406907   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.407358   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.407376   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.407699   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.407891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.409287   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.409489   66600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.409502   66600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:27.409517   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.412275   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412828   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.412858   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412984   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.413162   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.413303   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.413453   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.546891   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:27.571837   66600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595105   66600 node_ready.go:49] node "embed-certs-021370" has status "Ready":"True"
	I1028 18:34:27.595127   66600 node_ready.go:38] duration metric: took 23.255834ms for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595156   66600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:27.603107   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:27.635422   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.657051   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.666085   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:27.666110   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:27.706366   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:27.706394   66600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:27.772162   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:27.772191   66600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:27.844116   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:28.411454   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411478   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411522   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411544   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411751   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.411960   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.411982   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.411991   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411998   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.412223   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.412266   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413310   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413326   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413338   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.413344   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.413569   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413584   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.420867   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.420891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.421092   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.421168   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.421169   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957337   66600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.11317187s)
	I1028 18:34:28.957385   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957395   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957696   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957715   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957725   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957733   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957957   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957970   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957988   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957990   66600 addons.go:475] Verifying addon metrics-server=true in "embed-certs-021370"
	I1028 18:34:28.959590   66600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:28.961127   66600 addons.go:510] duration metric: took 1.615922156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:29.611126   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:32.110577   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:34.610544   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:37.111319   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.111342   66600 pod_ready.go:82] duration metric: took 9.508204126s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.111351   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119547   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.119571   66600 pod_ready.go:82] duration metric: took 8.212577ms for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119581   66600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126030   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.126048   66600 pod_ready.go:82] duration metric: took 6.46043ms for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126056   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132366   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.132386   66600 pod_ready.go:82] duration metric: took 6.323715ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132394   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137151   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.137171   66600 pod_ready.go:82] duration metric: took 4.770272ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137182   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507159   66600 pod_ready.go:93] pod "kube-proxy-nrr6g" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.507180   66600 pod_ready.go:82] duration metric: took 369.991591ms for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507189   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908006   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.908030   66600 pod_ready.go:82] duration metric: took 400.834669ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908038   66600 pod_ready.go:39] duration metric: took 10.312872321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:37.908052   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:37.908098   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:37.924515   66600 api_server.go:72] duration metric: took 10.579335154s to wait for apiserver process to appear ...
	I1028 18:34:37.924552   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:37.924572   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:34:37.929438   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:34:37.930716   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:37.930742   66600 api_server.go:131] duration metric: took 6.181503ms to wait for apiserver health ...
	I1028 18:34:37.930752   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:38.113401   66600 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:38.113430   66600 system_pods.go:61] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.113435   66600 system_pods.go:61] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.113439   66600 system_pods.go:61] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.113442   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.113446   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.113449   66600 system_pods.go:61] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.113452   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.113457   66600 system_pods.go:61] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.113462   66600 system_pods.go:61] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.113468   66600 system_pods.go:74] duration metric: took 182.711396ms to wait for pod list to return data ...
	I1028 18:34:38.113475   66600 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:38.309139   66600 default_sa.go:45] found service account: "default"
	I1028 18:34:38.309170   66600 default_sa.go:55] duration metric: took 195.688587ms for default service account to be created ...
	I1028 18:34:38.309182   66600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:38.510307   66600 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:38.510336   66600 system_pods.go:89] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.510341   66600 system_pods.go:89] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.510345   66600 system_pods.go:89] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.510349   66600 system_pods.go:89] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.510352   66600 system_pods.go:89] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.510355   66600 system_pods.go:89] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.510360   66600 system_pods.go:89] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.510368   66600 system_pods.go:89] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.510376   66600 system_pods.go:89] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.510391   66600 system_pods.go:126] duration metric: took 201.199416ms to wait for k8s-apps to be running ...
	I1028 18:34:38.510403   66600 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:38.510448   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:38.526043   66600 system_svc.go:56] duration metric: took 15.628796ms WaitForService to wait for kubelet
	I1028 18:34:38.526075   66600 kubeadm.go:582] duration metric: took 11.18089878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:38.526109   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:38.707568   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:38.707594   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:38.707604   66600 node_conditions.go:105] duration metric: took 181.491056ms to run NodePressure ...
	I1028 18:34:38.707615   66600 start.go:241] waiting for startup goroutines ...
	I1028 18:34:38.707621   66600 start.go:246] waiting for cluster config update ...
	I1028 18:34:38.707631   66600 start.go:255] writing updated cluster config ...
	I1028 18:34:38.707950   66600 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:38.755355   66600 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:38.757256   66600 out.go:177] * Done! kubectl is now configured to use "embed-certs-021370" cluster and "default" namespace by default
	I1028 18:34:49.381931   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:34:49.382111   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:34:49.383570   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:34:49.383633   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:49.383732   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:49.383859   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:49.383975   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:34:49.384073   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:49.385654   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:49.385757   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:49.385847   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:49.385937   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:49.386008   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:49.386118   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:49.386214   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:49.386316   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:49.386391   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:49.386478   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:49.386597   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:49.386643   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:49.386724   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:49.386813   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:49.386891   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:49.386983   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:49.387070   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:49.387209   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:49.387330   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:49.387389   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:49.387474   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:49.389653   67149 out.go:235]   - Booting up control plane ...
	I1028 18:34:49.389760   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:49.389867   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:49.389971   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:49.390088   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:49.390228   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:34:49.390277   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:34:49.390355   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390550   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390645   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390832   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390903   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391069   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391163   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391354   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391452   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391649   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391657   67149 kubeadm.go:310] 
	I1028 18:34:49.391691   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:34:49.391743   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:34:49.391758   67149 kubeadm.go:310] 
	I1028 18:34:49.391789   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:34:49.391822   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:34:49.391908   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:34:49.391914   67149 kubeadm.go:310] 
	I1028 18:34:49.392024   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:34:49.392073   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:34:49.392133   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:34:49.392142   67149 kubeadm.go:310] 
	I1028 18:34:49.392267   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:34:49.392363   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:34:49.392380   67149 kubeadm.go:310] 
	I1028 18:34:49.392525   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:34:49.392629   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:34:49.392737   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:34:49.392830   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:34:49.392879   67149 kubeadm.go:310] 
	W1028 18:34:49.392949   67149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:34:49.392991   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:34:49.869859   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:49.884524   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:49.896293   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:49.896318   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:49.896354   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:49.907312   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:49.907364   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:49.917926   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:49.928001   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:49.928048   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:49.938687   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.949217   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:49.949268   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.959955   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:49.970105   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:49.970156   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:49.980760   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:50.212973   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:36:46.686631   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:36:46.686753   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:36:46.688224   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:36:46.688325   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:36:46.688449   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:36:46.688587   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:36:46.688726   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:36:46.688813   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:36:46.690320   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:36:46.690427   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:36:46.690524   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:36:46.690627   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:36:46.690720   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:36:46.690824   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:36:46.690897   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:36:46.690984   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:36:46.691064   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:36:46.691161   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:36:46.691253   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:36:46.691309   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:36:46.691379   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:36:46.691426   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:36:46.691471   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:36:46.691547   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:36:46.691619   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:36:46.691713   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:36:46.691814   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:36:46.691864   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:36:46.691951   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:36:46.693258   67149 out.go:235]   - Booting up control plane ...
	I1028 18:36:46.693374   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:36:46.693471   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:36:46.693566   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:36:46.693682   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:36:46.693870   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:36:46.693930   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:36:46.694023   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694253   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694343   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694527   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694614   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694798   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694894   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695053   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695119   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695315   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695324   67149 kubeadm.go:310] 
	I1028 18:36:46.695357   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:36:46.695392   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:36:46.695398   67149 kubeadm.go:310] 
	I1028 18:36:46.695427   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:36:46.695456   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:36:46.695542   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:36:46.695549   67149 kubeadm.go:310] 
	I1028 18:36:46.695665   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:36:46.695717   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:36:46.695767   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:36:46.695781   67149 kubeadm.go:310] 
	I1028 18:36:46.695921   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:36:46.696037   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:36:46.696048   67149 kubeadm.go:310] 
	I1028 18:36:46.696177   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:36:46.696285   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:36:46.696390   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:36:46.696512   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:36:46.696560   67149 kubeadm.go:310] 
	I1028 18:36:46.696579   67149 kubeadm.go:394] duration metric: took 7m56.601380499s to StartCluster
	I1028 18:36:46.696618   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:36:46.696670   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:36:46.738714   67149 cri.go:89] found id: ""
	I1028 18:36:46.738741   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.738749   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:36:46.738757   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:36:46.738822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:36:46.772906   67149 cri.go:89] found id: ""
	I1028 18:36:46.772934   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.772944   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:36:46.772951   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:36:46.773028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:36:46.808785   67149 cri.go:89] found id: ""
	I1028 18:36:46.808809   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.808819   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:36:46.808827   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:36:46.808884   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:36:46.842977   67149 cri.go:89] found id: ""
	I1028 18:36:46.843007   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.843016   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:36:46.843022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:36:46.843095   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:36:46.878121   67149 cri.go:89] found id: ""
	I1028 18:36:46.878148   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.878159   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:36:46.878166   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:36:46.878231   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:36:46.911953   67149 cri.go:89] found id: ""
	I1028 18:36:46.911977   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.911984   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:36:46.911990   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:36:46.912054   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:36:46.944291   67149 cri.go:89] found id: ""
	I1028 18:36:46.944317   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.944324   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:36:46.944329   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:36:46.944379   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:36:46.976525   67149 cri.go:89] found id: ""
	I1028 18:36:46.976554   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.976564   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:36:46.976575   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:36:46.976588   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:36:47.026517   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:36:47.026544   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:36:47.041198   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:36:47.041231   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:36:47.115650   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:36:47.115681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:36:47.115695   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:36:47.218059   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:36:47.218093   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 18:36:47.257114   67149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:36:47.257182   67149 out.go:270] * 
	W1028 18:36:47.257240   67149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.257280   67149 out.go:270] * 
	W1028 18:36:47.258088   67149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:36:47.261521   67149 out.go:201] 
	W1028 18:36:47.262707   67149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.262742   67149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:36:47.262760   67149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:36:47.264073   67149 out.go:201] 
	
	
	==> CRI-O <==
	Oct 28 18:36:48 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:48.991879258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140608991857841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d5c993e-d0bf-438c-ab90-9fe1641df799 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:36:48 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:48.992391167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74da6952-161d-4d3b-97e8-9e89f6a5e7a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:48 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:48.992438829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74da6952-161d-4d3b-97e8-9e89f6a5e7a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:48 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:48.992473779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=74da6952-161d-4d3b-97e8-9e89f6a5e7a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.024576162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=314007c8-532a-4f0f-9928-c09462c7aa75 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.024702271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=314007c8-532a-4f0f-9928-c09462c7aa75 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.026080476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47d316e1-9ce0-4cb0-91ab-987619aa071b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.026420483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140609026402424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47d316e1-9ce0-4cb0-91ab-987619aa071b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.026938497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca0544da-7493-410c-a75c-f8574aebe6f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.026982487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca0544da-7493-410c-a75c-f8574aebe6f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.027015392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ca0544da-7493-410c-a75c-f8574aebe6f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.062917785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87fca5c5-f5ff-4aa7-99e3-fc5ddd01cd98 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.063007420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87fca5c5-f5ff-4aa7-99e3-fc5ddd01cd98 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.064484358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fef1a0c-f094-4e43-9954-52ac6ff28290 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.065061001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140609065031367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fef1a0c-f094-4e43-9954-52ac6ff28290 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.065828421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ebd4d40-d1ee-443f-b810-548bb663040f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.065893127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ebd4d40-d1ee-443f-b810-548bb663040f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.065938283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8ebd4d40-d1ee-443f-b810-548bb663040f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.103094162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5ead362-9026-4865-b8b4-59228eef4428 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.103199366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5ead362-9026-4865-b8b4-59228eef4428 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.104783036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da75d9b6-adb9-4dd8-9f0b-e94388abb729 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.105272821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140609105243492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da75d9b6-adb9-4dd8-9f0b-e94388abb729 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.105917294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acaf6f0d-4321-415f-acd1-2b326563d046 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.105983762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acaf6f0d-4321-415f-acd1-2b326563d046 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:36:49 old-k8s-version-223868 crio[633]: time="2024-10-28 18:36:49.106035105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=acaf6f0d-4321-415f-acd1-2b326563d046 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 18:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052154] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040854] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.948848] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.654628] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568759] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.229575] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.078716] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057084] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.217028] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.132211] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.266373] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +7.871428] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.072119] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.097659] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[Oct28 18:29] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 18:32] systemd-fstab-generator[5063]: Ignoring "noauto" option for root device
	[Oct28 18:34] systemd-fstab-generator[5342]: Ignoring "noauto" option for root device
	[  +0.070292] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:36:49 up 8 min,  0 users,  load average: 0.02, 0.14, 0.09
	Linux old-k8s-version-223868 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000264fc0, 0xc000ba5200, 0xc000ba5200, 0x0, 0x0)
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0006dc700)
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: goroutine 149 [runnable]:
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b73cc0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bac720, 0x0, 0x0)
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0006dc700)
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 28 18:36:46 old-k8s-version-223868 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 28 18:36:46 old-k8s-version-223868 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 18:36:46 old-k8s-version-223868 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 18:36:47 old-k8s-version-223868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 28 18:36:47 old-k8s-version-223868 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 18:36:47 old-k8s-version-223868 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 18:36:47 old-k8s-version-223868 kubelet[5587]: I1028 18:36:47.398688    5587 server.go:416] Version: v1.20.0
	Oct 28 18:36:47 old-k8s-version-223868 kubelet[5587]: I1028 18:36:47.399048    5587 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 18:36:47 old-k8s-version-223868 kubelet[5587]: I1028 18:36:47.401460    5587 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 18:36:47 old-k8s-version-223868 kubelet[5587]: W1028 18:36:47.402378    5587 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 28 18:36:47 old-k8s-version-223868 kubelet[5587]: I1028 18:36:47.402577    5587 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (222.619212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-223868" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (734.57s)
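The kubeadm output captured above never sees a healthy kubelet, and minikube's own suggestion in the log is to check 'journalctl -xeu kubelet' and retry with the kubelet cgroup driver pinned to systemd. As a reference, a minimal sketch of that advice against this profile is shown here; the profile name, socket path, and flag are taken from the log above, and whether the cgroup-driver override actually resolves this particular run is not verified:

	# inspect the kubelet and containers on the node (commands mirror the kubeadm hint in the log)
	minikube ssh -p old-k8s-version-223868 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-223868 "sudo journalctl -xeu kubelet | tail -n 100"
	minikube ssh -p old-k8s-version-223868 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# retry the start with the cgroup driver the suggestion names (the existing profile keeps its kvm2/crio settings)
	minikube start -p old-k8s-version-223868 --extra-config=kubelet.cgroup-driver=systemd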

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033: exit status 3 (3.167853162s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:25:25.888795   67362 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E1028 18:25:25.888820   67362 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-692033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-692033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152437029s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-692033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
E1028 18:25:33.435669   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033: exit status 3 (3.063162121s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 18:25:35.104806   67443 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E1028 18:25:35.104826   67443 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-692033" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
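In this failure the host never becomes reachable again after the stop: SSH to 192.168.39.215:22 returns "no route to host", so the post-stop status check reports "Error" instead of the expected "Stopped", and the addon enable exits 11 on the same connection error. For reference, the sequence the test drives can be replayed by hand; the profile name and image override below are copied from the log above:

	minikube stop -p default-k8s-diff-port-692033 --alsologtostderr -v=3
	minikube status -p default-k8s-diff-port-692033 --format='{{.Host}}'    # test expects "Stopped"
	minikube addons enable dashboard -p default-k8s-diff-port-692033 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4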

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-051152 -n no-preload-051152
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 18:42:47.751957576 +0000 UTC m=+5807.210001881
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
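The wait here is for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, and it times out after 9m0s. A quick way to inspect those pods directly, assuming the kubeconfig context carries the profile name (as minikube sets it by default):

	kubectl --context no-preload-051152 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-051152 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard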
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-051152 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-051152 logs -n 25: (1.959160592s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-703793                              | running-upgrade-703793       | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-021370            | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:25:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:25:35.146308   67489 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:25:35.146467   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146474   67489 out.go:358] Setting ErrFile to fd 2...
	I1028 18:25:35.146480   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146973   67489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:25:35.147825   67489 out.go:352] Setting JSON to false
	I1028 18:25:35.148718   67489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7678,"bootTime":1730132257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:25:35.148810   67489 start.go:139] virtualization: kvm guest
	I1028 18:25:35.150695   67489 out.go:177] * [default-k8s-diff-port-692033] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:25:35.151797   67489 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:25:35.151797   67489 notify.go:220] Checking for updates...
	I1028 18:25:35.154193   67489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:25:35.155491   67489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:25:35.156576   67489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:25:35.157619   67489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:25:35.158702   67489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:25:35.160202   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:25:35.160602   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.160658   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.175095   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I1028 18:25:35.175421   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.175848   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.175863   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.176187   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.176387   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.176667   67489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:25:35.177210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.177325   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.191270   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I1028 18:25:35.191687   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.192092   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.192114   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.192388   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.192551   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.222738   67489 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:25:35.223900   67489 start.go:297] selected driver: kvm2
	I1028 18:25:35.223910   67489 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.224018   67489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:25:35.224696   67489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.224770   67489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:25:35.238839   67489 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:25:35.239228   67489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:25:35.239258   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:25:35.239310   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:25:35.239360   67489 start.go:340] cluster config:
	{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.239480   67489 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.241175   67489 out.go:177] * Starting "default-k8s-diff-port-692033" primary control-plane node in "default-k8s-diff-port-692033" cluster
	I1028 18:25:37.248702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:35.242393   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:25:35.242423   67489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:25:35.242432   67489 cache.go:56] Caching tarball of preloaded images
	I1028 18:25:35.242504   67489 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:25:35.242517   67489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:25:35.242600   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:25:35.242763   67489 start.go:360] acquireMachinesLock for default-k8s-diff-port-692033: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:25:40.320712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:46.400713   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:49.472709   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:55.552712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:58.624703   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:04.704707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:07.776740   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:13.856735   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:16.928744   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:23.008721   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:26.080668   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:32.160706   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:35.232663   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:41.312774   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:44.384739   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:50.464729   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:53.536702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:59.616750   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:02.688719   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:08.768731   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:11.840771   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:17.920756   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:20.992753   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:27.072785   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:30.144726   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:36.224704   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:39.296825   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:45.376692   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:48.448699   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:54.528707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:57.600754   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:28:00.605468   66801 start.go:364] duration metric: took 4m12.368996576s to acquireMachinesLock for "no-preload-051152"
	I1028 18:28:00.605517   66801 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:00.605525   66801 fix.go:54] fixHost starting: 
	I1028 18:28:00.605815   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:00.605850   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:00.621828   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1028 18:28:00.622237   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:00.622654   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:28:00.622674   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:00.622975   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:00.623150   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:00.623272   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:28:00.624880   66801 fix.go:112] recreateIfNeeded on no-preload-051152: state=Stopped err=<nil>
	I1028 18:28:00.624910   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	W1028 18:28:00.625076   66801 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:00.627065   66801 out.go:177] * Restarting existing kvm2 VM for "no-preload-051152" ...
	I1028 18:28:00.603089   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:00.603122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603425   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:28:00.603450   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603663   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:28:00.605343   66600 machine.go:96] duration metric: took 4m37.432159141s to provisionDockerMachine
	I1028 18:28:00.605380   66600 fix.go:56] duration metric: took 4m37.452432846s for fixHost
	I1028 18:28:00.605387   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 4m37.452449736s
	W1028 18:28:00.605419   66600 start.go:714] error starting host: provision: host is not running
	W1028 18:28:00.605517   66600 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 18:28:00.605528   66600 start.go:729] Will try again in 5 seconds ...
	I1028 18:28:00.628172   66801 main.go:141] libmachine: (no-preload-051152) Calling .Start
	I1028 18:28:00.628308   66801 main.go:141] libmachine: (no-preload-051152) Ensuring networks are active...
	I1028 18:28:00.629123   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network default is active
	I1028 18:28:00.629467   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network mk-no-preload-051152 is active
	I1028 18:28:00.629782   66801 main.go:141] libmachine: (no-preload-051152) Getting domain xml...
	I1028 18:28:00.630687   66801 main.go:141] libmachine: (no-preload-051152) Creating domain...
	I1028 18:28:01.819872   66801 main.go:141] libmachine: (no-preload-051152) Waiting to get IP...
	I1028 18:28:01.820792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:01.821214   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:01.821287   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:01.821204   68016 retry.go:31] will retry after 269.081621ms: waiting for machine to come up
	I1028 18:28:02.091799   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.092220   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.092242   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.092175   68016 retry.go:31] will retry after 341.926163ms: waiting for machine to come up
	I1028 18:28:02.435679   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.436035   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.436067   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.435982   68016 retry.go:31] will retry after 355.739166ms: waiting for machine to come up
	I1028 18:28:02.793549   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.793928   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.793953   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.793881   68016 retry.go:31] will retry after 496.396184ms: waiting for machine to come up
	I1028 18:28:05.607678   66600 start.go:360] acquireMachinesLock for embed-certs-021370: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:28:03.291568   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.292038   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.292068   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.291978   68016 retry.go:31] will retry after 561.311245ms: waiting for machine to come up
	I1028 18:28:03.854782   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.855137   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.855166   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.855088   68016 retry.go:31] will retry after 574.675969ms: waiting for machine to come up
	I1028 18:28:04.431784   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:04.432226   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:04.432250   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:04.432177   68016 retry.go:31] will retry after 1.028136295s: waiting for machine to come up
	I1028 18:28:05.461477   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:05.461839   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:05.461869   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:05.461795   68016 retry.go:31] will retry after 955.343831ms: waiting for machine to come up
	I1028 18:28:06.418161   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:06.418629   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:06.418659   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:06.418576   68016 retry.go:31] will retry after 1.615930502s: waiting for machine to come up
	I1028 18:28:08.036275   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:08.036641   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:08.036662   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:08.036615   68016 retry.go:31] will retry after 2.111463198s: waiting for machine to come up
	I1028 18:28:10.150891   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:10.151403   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:10.151429   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:10.151351   68016 retry.go:31] will retry after 2.35232289s: waiting for machine to come up
	I1028 18:28:12.506070   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:12.506471   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:12.506494   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:12.506447   68016 retry.go:31] will retry after 2.874687772s: waiting for machine to come up
	I1028 18:28:15.384360   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:15.384680   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:15.384712   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:15.384636   68016 retry.go:31] will retry after 3.299950406s: waiting for machine to come up
	I1028 18:28:19.893083   67149 start.go:364] duration metric: took 3m43.747535803s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:28:19.893161   67149 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:19.893170   67149 fix.go:54] fixHost starting: 
	I1028 18:28:19.893556   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:19.893608   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:19.909857   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I1028 18:28:19.910215   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:19.910669   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:28:19.910690   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:19.911049   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:19.911241   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:19.911395   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:28:19.912825   67149 fix.go:112] recreateIfNeeded on old-k8s-version-223868: state=Stopped err=<nil>
	I1028 18:28:19.912856   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	W1028 18:28:19.912996   67149 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:19.915041   67149 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	I1028 18:28:19.916422   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .Start
	I1028 18:28:19.916611   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:28:19.917295   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:28:19.917560   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:28:19.917951   67149 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:28:19.918628   67149 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:28:18.688243   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.688710   66801 main.go:141] libmachine: (no-preload-051152) Found IP for machine: 192.168.61.78
	I1028 18:28:18.688738   66801 main.go:141] libmachine: (no-preload-051152) Reserving static IP address...
	I1028 18:28:18.688754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has current primary IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.689151   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.689174   66801 main.go:141] libmachine: (no-preload-051152) Reserved static IP address: 192.168.61.78
	I1028 18:28:18.689188   66801 main.go:141] libmachine: (no-preload-051152) DBG | skip adding static IP to network mk-no-preload-051152 - found existing host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"}
	I1028 18:28:18.689198   66801 main.go:141] libmachine: (no-preload-051152) Waiting for SSH to be available...
	I1028 18:28:18.689217   66801 main.go:141] libmachine: (no-preload-051152) DBG | Getting to WaitForSSH function...
	I1028 18:28:18.691372   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691721   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.691754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691861   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH client type: external
	I1028 18:28:18.691890   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa (-rw-------)
	I1028 18:28:18.691950   66801 main.go:141] libmachine: (no-preload-051152) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:18.691967   66801 main.go:141] libmachine: (no-preload-051152) DBG | About to run SSH command:
	I1028 18:28:18.691979   66801 main.go:141] libmachine: (no-preload-051152) DBG | exit 0
	I1028 18:28:18.816169   66801 main.go:141] libmachine: (no-preload-051152) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:18.816571   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetConfigRaw
	I1028 18:28:18.817209   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:18.819569   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.819891   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.819913   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.820164   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:28:18.820375   66801 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:18.820392   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:18.820618   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.822580   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.822953   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.822983   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.823096   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.823250   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823390   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823537   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.823687   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.823878   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.823890   66801 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:18.932489   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:18.932516   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.932769   66801 buildroot.go:166] provisioning hostname "no-preload-051152"
	I1028 18:28:18.932798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.933003   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.935565   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.935938   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.935965   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.936147   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.936346   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936513   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936674   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.936838   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.936994   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.937006   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-051152 && echo "no-preload-051152" | sudo tee /etc/hostname
	I1028 18:28:19.057840   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-051152
	
	I1028 18:28:19.057872   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.060536   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.060917   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.060946   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.061068   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.061237   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061405   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061544   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.061700   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.061848   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.061863   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-051152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-051152/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-051152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:19.180890   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:19.180920   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:19.180957   66801 buildroot.go:174] setting up certificates
	I1028 18:28:19.180971   66801 provision.go:84] configureAuth start
	I1028 18:28:19.180985   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:19.181299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.183792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184144   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.184172   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184309   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.186298   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186588   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.186616   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186722   66801 provision.go:143] copyHostCerts
	I1028 18:28:19.186790   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:19.186804   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:19.186868   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:19.186974   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:19.186986   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:19.187023   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:19.187107   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:19.187115   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:19.187146   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:19.187197   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.no-preload-051152 san=[127.0.0.1 192.168.61.78 localhost minikube no-preload-051152]
	I1028 18:28:19.275109   66801 provision.go:177] copyRemoteCerts
	I1028 18:28:19.275175   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:19.275200   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.278392   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.278946   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.278978   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.279183   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.279454   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.279651   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.279789   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.362094   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:19.384635   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:28:19.406649   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:19.428807   66801 provision.go:87] duration metric: took 247.825267ms to configureAuth
	I1028 18:28:19.428830   66801 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:19.429026   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:28:19.429090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.431615   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.431928   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.431954   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.432090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.432278   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432434   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432602   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.432786   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.432932   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.432946   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:19.655137   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:19.655163   66801 machine.go:96] duration metric: took 834.775161ms to provisionDockerMachine
	I1028 18:28:19.655175   66801 start.go:293] postStartSetup for "no-preload-051152" (driver="kvm2")
	I1028 18:28:19.655185   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:19.655199   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.655509   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:19.655532   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.658099   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658411   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.658442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658566   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.658744   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.658884   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.659013   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.743030   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:19.746986   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:19.747007   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:19.747081   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:19.747177   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:19.747290   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:19.756378   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:19.779243   66801 start.go:296] duration metric: took 124.056855ms for postStartSetup
	I1028 18:28:19.779283   66801 fix.go:56] duration metric: took 19.173756385s for fixHost
	I1028 18:28:19.779305   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.781887   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782205   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.782226   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782367   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.782557   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782709   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782836   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.782999   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.783180   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.783191   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:19.892920   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140099.866892804
	
	I1028 18:28:19.892944   66801 fix.go:216] guest clock: 1730140099.866892804
	I1028 18:28:19.892954   66801 fix.go:229] Guest: 2024-10-28 18:28:19.866892804 +0000 UTC Remote: 2024-10-28 18:28:19.779287594 +0000 UTC m=+271.674302547 (delta=87.60521ms)
	I1028 18:28:19.892997   66801 fix.go:200] guest clock delta is within tolerance: 87.60521ms
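The lines above show the provisioner reading the guest clock with `date +%s.%N` and comparing it against the host time to make sure the skew stays within a tolerance. Below is a minimal Go sketch of that idea; the function names and the 2-second tolerance are assumptions for illustration, not minikube's actual fix.go logic.

    // clockdelta.go — illustrative sketch of the guest-clock tolerance check.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the output of `date +%s.%N` (e.g. "1730140099.866892804")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1730140099.866892804")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Now().Sub(guest)
    	// Accept the guest clock if it is within an assumed 2s tolerance.
    	if math.Abs(delta.Seconds()) <= 2 {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v is too large; clock sync needed\n", delta)
    	}
    }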
	I1028 18:28:19.893008   66801 start.go:83] releasing machines lock for "no-preload-051152", held for 19.287505767s
	I1028 18:28:19.893034   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.893299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.895775   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896177   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.896204   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896362   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.896826   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897023   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897133   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:19.897171   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.897267   66801 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:19.897291   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.899703   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.899995   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900031   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900054   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900208   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900374   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900416   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900550   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.900626   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900707   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.900818   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900944   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.901098   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.982201   66801 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:20.008913   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:20.157816   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:20.165773   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:20.165837   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:20.187342   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:20.187359   66801 start.go:495] detecting cgroup driver to use...
	I1028 18:28:20.187423   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:20.204825   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:20.220702   66801 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:20.220776   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:20.238812   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:20.253664   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:20.363567   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:20.534475   66801 docker.go:233] disabling docker service ...
	I1028 18:28:20.534564   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:20.548424   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:20.564292   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:20.687135   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:20.796225   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
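The block above disables the docker service before switching the node over to CRI-O: stop the socket and service, disable the socket, mask the unit, then confirm nothing is still active. A rough local Go sketch of the same systemctl sequence follows; minikube runs these over SSH, so treat this as illustrative only.

    // disabledocker.go — illustrative sketch of the stop/disable/mask/verify pattern.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed: %v: %s\n", s, err, out)
    		}
    	}
    	// The exit status of `systemctl is-active --quiet ...` tells us whether
    	// the unit is still running (0 means active).
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "docker").Run()
    	fmt.Println("docker still active:", err == nil)
    }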
	I1028 18:28:20.810327   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:20.828804   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:28:20.828866   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.838719   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:20.838768   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.849166   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.862811   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.875223   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:20.885402   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.895602   66801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.914163   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
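The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: set the pause image, force the cgroupfs cgroup manager, keep conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. A condensed Go sketch of applying such edits locally is shown below; the values mirror the log, but the helper itself is hypothetical and minikube actually runs the sed commands over SSH.

    // crioconf.go — illustrative sketch of the CRI-O drop-in edits.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	edits := []string{
    		// use the expected pause image
    		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
    		// force the cgroupfs cgroup manager and keep conmon in the pod cgroup
    		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
    		`/conmon_cgroup = .*/d`,
    		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
    	}
    	for _, e := range edits {
    		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
    			fmt.Printf("sed %q failed: %v: %s\n", e, err, out)
    			return
    		}
    	}
    	fmt.Println("CRI-O drop-in updated; restart crio to apply")
    }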
	I1028 18:28:20.924194   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:20.934907   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:20.934958   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:20.948898   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
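Note the fallback above: the `sysctl net.bridge.bridge-nf-call-iptables` probe fails because the proc file does not exist yet, so the provisioner loads br_netfilter and then enables IPv4 forwarding. A minimal Go sketch of that check-then-fallback, assuming local root access rather than minikube's SSH runner:

    // netfilter.go — illustrative sketch of the br_netfilter / ip_forward fallback.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(bridgeSysctl); err != nil {
    		// The sysctl file is missing, so the br_netfilter module is not loaded yet.
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
    			return
    		}
    	}
    	// Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Printf("enabling ip_forward failed (needs root): %v\n", err)
    	}
    }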
	I1028 18:28:20.958955   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:21.069438   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:21.175294   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:21.175379   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:21.179886   66801 start.go:563] Will wait 60s for crictl version
	I1028 18:28:21.179942   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.184195   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:21.226939   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:21.227043   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.254702   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.284607   66801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:28:21.285906   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:21.288560   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.288918   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:21.288945   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.289132   66801 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:21.293108   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
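The /etc/hosts update above is idempotent: grep for the entry first, and if it is missing, rewrite the file with any stale line for the hostname dropped and a fresh "IP<TAB>hostname" line appended. A small Go sketch of the same pattern, where the function name and the write-then-rename step are assumptions for illustration:

    // hostsentry.go — illustrative sketch of the idempotent /etc/hosts update.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // drop any stale entry for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		fmt.Println("update failed (typically needs root):", err)
    	}
    }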
	I1028 18:28:21.307303   66801 kubeadm.go:883] updating cluster {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:21.307447   66801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:28:21.307495   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:21.347493   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:28:21.347520   66801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:21.347595   66801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.347609   66801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.347621   66801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.347656   66801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.347690   66801 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 18:28:21.347691   66801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.347758   66801 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.347695   66801 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349312   66801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.349387   66801 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 18:28:21.349402   66801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.349526   66801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.349574   66801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.349582   66801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.349632   66801 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349311   66801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.515246   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.515760   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.543817   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 18:28:21.551755   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.562433   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.594208   66801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 18:28:21.594257   66801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.594291   66801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 18:28:21.594317   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.594323   66801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.594364   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.666046   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.666654   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.757831   66801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 18:28:21.757867   66801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.757867   66801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 18:28:21.757894   66801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.757914   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757926   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.757937   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757982   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.758142   66801 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 18:28:21.758161   66801 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 18:28:21.758197   66801 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.758169   66801 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.758234   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.758270   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.813746   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.813792   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.813836   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.813837   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.813840   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.813890   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.934434   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.958229   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.958287   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.958377   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.958381   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.958467   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.053179   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 18:28:22.053304   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.053351   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 18:28:22.053447   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:22.087756   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:22.087762   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:22.087826   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:22.087867   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.087897   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 18:28:22.087907   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087938   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087942   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 18:28:22.161136   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 18:28:22.161259   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:22.201924   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 18:28:22.201967   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 18:28:22.202032   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:22.202068   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:21.207941   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:28:21.209066   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.209518   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.209604   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.209495   68155 retry.go:31] will retry after 258.02952ms: waiting for machine to come up
	I1028 18:28:21.468599   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.469034   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.469052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.468996   68155 retry.go:31] will retry after 389.053264ms: waiting for machine to come up
	I1028 18:28:21.859493   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.859987   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.860017   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.859929   68155 retry.go:31] will retry after 454.438888ms: waiting for machine to come up
	I1028 18:28:22.315484   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.315961   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.315988   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.315904   68155 retry.go:31] will retry after 531.549561ms: waiting for machine to come up
	I1028 18:28:22.849247   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.849736   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.849791   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.849693   68155 retry.go:31] will retry after 602.202649ms: waiting for machine to come up
	I1028 18:28:23.453311   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:23.453859   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:23.453887   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:23.453796   68155 retry.go:31] will retry after 836.622626ms: waiting for machine to come up
	I1028 18:28:24.291959   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:24.292286   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:24.292315   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:24.292252   68155 retry.go:31] will retry after 1.187276744s: waiting for machine to come up
	I1028 18:28:25.480962   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:25.481398   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:25.481417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:25.481350   68155 retry.go:31] will retry after 1.417127806s: waiting for machine to come up
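The old-k8s-version-223868 machine above is still waiting for a DHCP lease, so the driver polls with growing, slightly randomized delays ("will retry after 258ms ... 1.41s"). Below is a minimal Go sketch of that poll-with-backoff pattern; waitForIP, the backoff constants, and the fake lookup are illustrative assumptions, not minikube's retry.go.

    // waitip.go — illustrative sketch of retrying until the machine reports an IP.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ip, err := lookup()
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		// Back off with a little jitter, roughly like the log's growing waits.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	start := time.Now()
    	ip, err := waitForIP(func() (string, error) {
    		if time.Since(start) > 2*time.Second {
    			return "192.168.61.78", nil // pretend the DHCP lease finally shows up
    		}
    		return "", errors.New("no lease yet")
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }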
	I1028 18:28:23.586400   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.127903   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3: (2.040063682s)
	I1028 18:28:24.127962   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 18:28:24.127967   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (1.966690859s)
	I1028 18:28:24.127991   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 18:28:24.128010   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.925953727s)
	I1028 18:28:24.128034   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.925947261s)
	I1028 18:28:24.128041   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 18:28:24.128048   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 18:28:24.127904   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.03994028s)
	I1028 18:28:24.128069   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:24.128085   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 18:28:24.128109   66801 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 18:28:24.128123   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.128138   66801 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.128166   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:24.128180   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.132734   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 18:28:26.097200   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.9689964s)
	I1028 18:28:26.097240   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 18:28:26.097241   66801 ssh_runner.go:235] Completed: which crictl: (1.969052863s)
	I1028 18:28:26.097264   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.097308   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:26.097309   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.900944   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:26.901481   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:26.901511   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:26.901426   68155 retry.go:31] will retry after 1.766762252s: waiting for machine to come up
	I1028 18:28:28.670334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:28.670798   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:28.670827   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:28.670742   68155 retry.go:31] will retry after 2.287152926s: waiting for machine to come up
	I1028 18:28:30.959639   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:30.959947   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:30.959963   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:30.959917   68155 retry.go:31] will retry after 1.799223833s: waiting for machine to come up
	I1028 18:28:28.165293   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.067952153s)
	I1028 18:28:28.165410   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:28.165497   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.068111312s)
	I1028 18:28:28.165523   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 18:28:28.165548   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.165591   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.208189   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:30.152411   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.986796263s)
	I1028 18:28:30.152458   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 18:28:30.152496   66801 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152504   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.944281988s)
	I1028 18:28:30.152550   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152556   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 18:28:30.152652   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:32.761498   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:32.761941   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:32.761968   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:32.761894   68155 retry.go:31] will retry after 2.231065891s: waiting for machine to come up
	I1028 18:28:34.994438   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:34.994902   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:34.994936   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:34.994847   68155 retry.go:31] will retry after 4.079794439s: waiting for machine to come up
	I1028 18:28:33.842059   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.689484833s)
	I1028 18:28:33.842109   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 18:28:33.842138   66801 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:33.842155   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.68947822s)
	I1028 18:28:33.842184   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 18:28:33.842206   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:35.714458   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.872222439s)
	I1028 18:28:35.714493   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 18:28:35.714521   66801 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:35.714567   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:36.568124   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 18:28:36.568177   66801 cache_images.go:123] Successfully loaded all cached images
	I1028 18:28:36.568185   66801 cache_images.go:92] duration metric: took 15.220649269s to LoadCachedImages
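Because this is a no-preload profile, the images are not preloaded on the node: each required image is inspected, removed from the runtime if its hash does not match, and then loaded from the local cache tarball, skipping the scp step when the tarball already exists under /var/lib/minikube/images. A rough Go sketch of that skip-if-present-then-load loop follows; running podman locally stands in for minikube's SSH runner, and the tarball list is just the subset seen in the log.

    // loadimages.go — illustrative sketch of loading cached image tarballs.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	tarballs := []string{
    		"/var/lib/minikube/images/kube-scheduler_v1.31.2",
    		"/var/lib/minikube/images/kube-apiserver_v1.31.2",
    		"/var/lib/minikube/images/etcd_3.5.15-0",
    		"/var/lib/minikube/images/coredns_v1.11.3",
    	}
    	for _, t := range tarballs {
    		if _, err := os.Stat(t); err != nil {
    			fmt.Printf("copy: %s missing, would scp it from the local cache first\n", t)
    			continue
    		}
    		fmt.Printf("copy: skipping %s (exists)\n", t)
    		if out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput(); err != nil {
    			fmt.Printf("podman load %s failed: %v: %s\n", t, err, out)
    			continue
    		}
    		fmt.Printf("loaded %s from cache\n", t)
    	}
    }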
	I1028 18:28:36.568199   66801 kubeadm.go:934] updating node { 192.168.61.78 8443 v1.31.2 crio true true} ...
	I1028 18:28:36.568310   66801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-051152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:36.568383   66801 ssh_runner.go:195] Run: crio config
	I1028 18:28:36.613400   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:36.613425   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:36.613435   66801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:36.613454   66801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.78 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-051152 NodeName:no-preload-051152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:28:36.613596   66801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-051152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:36.613669   66801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:28:36.624493   66801 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:36.624553   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:36.633828   66801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 18:28:36.649661   66801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:36.665454   66801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1028 18:28:36.681280   66801 ssh_runner.go:195] Run: grep 192.168.61.78	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:36.685010   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:36.697177   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:36.823266   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:36.840346   66801 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152 for IP: 192.168.61.78
	I1028 18:28:36.840366   66801 certs.go:194] generating shared ca certs ...
	I1028 18:28:36.840380   66801 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:36.840538   66801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:36.840578   66801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:36.840586   66801 certs.go:256] generating profile certs ...
	I1028 18:28:36.840661   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.key
	I1028 18:28:36.840722   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key.262d982c
	I1028 18:28:36.840758   66801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key
	I1028 18:28:36.840859   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:36.840892   66801 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:36.840902   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:36.840922   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:36.840943   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:36.840971   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:36.841025   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:36.841818   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:36.881548   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:36.907084   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:36.947810   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:36.976268   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 18:28:37.003795   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:28:37.036252   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:37.059731   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:28:37.083467   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:37.106397   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:37.128719   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:37.151133   66801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:37.166917   66801 ssh_runner.go:195] Run: openssl version
	I1028 18:28:37.172387   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:37.182117   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186329   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186389   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.191925   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:37.201799   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:37.211620   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215889   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215923   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.221588   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:37.231983   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:37.242291   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246869   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246904   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.252408   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
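
The lines above run `openssl x509 -hash -noout` on each CA bundle and then link `/etc/ssl/certs/<hash>.0` at the file, which is the standard OpenSSL subject-hash lookup convention. A hedged sketch of that two-step, shelling out to openssl the same way the log does (paths taken from the log; adjust for a real system):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate and
    // creates the "<hash>.0" symlink that the TLS stack uses for CA lookup.
    func linkBySubjectHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // mimic "ln -fs": replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
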
	I1028 18:28:37.262946   66801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:37.267334   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:37.273164   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:37.278831   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:37.284778   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:37.290547   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:37.296195   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
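
The `-checkend 86400` calls above ask openssl whether each control-plane certificate will still be valid for the next 24 hours. The same check expressed in pure Go, assuming a PEM-encoded certificate file like the ones named in the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // within the given window, mirroring "openssl x509 -checkend".
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
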
	I1028 18:28:37.301915   66801 kubeadm.go:392] StartCluster: {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:37.301986   66801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:37.302037   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.345115   66801 cri.go:89] found id: ""
	I1028 18:28:37.345185   66801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:37.355312   66801 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:37.355328   66801 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:37.355370   66801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:37.364777   66801 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:37.366056   66801 kubeconfig.go:125] found "no-preload-051152" server: "https://192.168.61.78:8443"
	I1028 18:28:37.368829   66801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:37.378010   66801 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.78
	I1028 18:28:37.378039   66801 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:37.378047   66801 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:37.378083   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.413442   66801 cri.go:89] found id: ""
	I1028 18:28:37.413522   66801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:37.428998   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:37.438365   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:37.438391   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:37.438442   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:37.447260   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:37.447310   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:37.456615   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:37.465292   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:37.465351   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:37.474382   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.482957   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:37.483012   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.491991   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:37.500635   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:37.500709   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:28:37.509632   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:37.518808   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:37.642796   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
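
The grep/rm pairs above implement minikube's stale-kubeconfig cleanup during a restart: any `/etc/kubernetes/*.conf` that is missing or does not point at `https://control-plane.minikube.internal:8443` is removed before the `kubeadm init phase` calls regenerate it. A small sketch of that check-then-remove logic in pure Go (assumes it runs with enough privilege to read and delete the files):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const controlPlaneURL = "https://control-plane.minikube.internal:8443"

    // cleanStaleConfigs removes any kubeconfig that is missing or does not
    // reference the expected control-plane endpoint, so kubeadm can rewrite it.
    func cleanStaleConfigs(paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err == nil && strings.Contains(string(data), controlPlaneURL) {
    			continue // config exists and points at the right endpoint
    		}
    		if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
    			fmt.Fprintf(os.Stderr, "remove %s: %v\n", p, rmErr)
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs([]string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
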
	I1028 18:28:40.421350   67489 start.go:364] duration metric: took 3m5.178550845s to acquireMachinesLock for "default-k8s-diff-port-692033"
	I1028 18:28:40.421416   67489 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:40.421430   67489 fix.go:54] fixHost starting: 
	I1028 18:28:40.421843   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:40.421894   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:40.439583   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I1028 18:28:40.440133   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:40.440679   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:28:40.440701   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:40.441025   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:40.441198   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:40.441359   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:28:40.443029   67489 fix.go:112] recreateIfNeeded on default-k8s-diff-port-692033: state=Stopped err=<nil>
	I1028 18:28:40.443055   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	W1028 18:28:40.443202   67489 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:40.445489   67489 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-692033" ...
	I1028 18:28:39.079052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079556   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079584   67149 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:28:39.079593   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:28:39.079888   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.079919   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | skip adding static IP to network mk-old-k8s-version-223868 - found existing host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"}
	I1028 18:28:39.079935   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:28:39.079955   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:28:39.079971   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:28:39.082041   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.082354   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082480   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:28:39.082500   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:28:39.082528   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:39.082555   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:28:39.082567   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:28:39.204523   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
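
"Waiting for SSH to be available" above boils down to repeatedly attempting an SSH command (here `exit 0`) until the guest answers. A minimal sketch that instead waits for the SSH port to accept a TCP connection, which is one common way to implement the same readiness probe; the address comes from the log.

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    // waitForSSH polls the guest's SSH port until a TCP connection succeeds or the
    // deadline passes; a fuller implementation would also run "exit 0" over SSH as
    // the log does.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("192.168.83.194:22", 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
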
	I1028 18:28:39.204883   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:28:39.205526   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.208073   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208434   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.208478   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208709   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:28:39.208907   67149 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:39.208926   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:39.209144   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.211109   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211407   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.211437   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.211739   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.211888   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.212033   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.212218   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.212388   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.212398   67149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:39.316528   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:39.316566   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.316813   67149 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:28:39.316841   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.317028   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.319389   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319687   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.319713   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319836   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.320017   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320167   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320310   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.320458   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.320642   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.320656   67149 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:28:39.439149   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:28:39.439179   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.441957   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442268   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.442300   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442528   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.442736   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.442940   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.443122   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.443304   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.443525   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.443550   67149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:39.561619   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
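
The shell fragment above ensures the machine's own name resolves locally: if no line in /etc/hosts ends with the hostname, it either rewrites an existing `127.0.1.1` entry or appends one. The same decision expressed in Go, operating on the file contents rather than via sed/tee (a sketch only; writing the result back would need root):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry returns the /etc/hosts content with a 127.0.1.1 entry for
    // hostname, adding or rewriting one only if no line already maps the name.
    func ensureHostsEntry(hosts, hostname string) string {
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
    		return hosts // some line already ends with the hostname
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	entry := "127.0.1.1 " + hostname
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, entry)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Print(ensureHostsEntry(string(data), "old-k8s-version-223868"))
    }
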
	I1028 18:28:39.561651   67149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:39.561702   67149 buildroot.go:174] setting up certificates
	I1028 18:28:39.561716   67149 provision.go:84] configureAuth start
	I1028 18:28:39.561731   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.562015   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.564838   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565195   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.565229   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565373   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.567875   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568262   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.568287   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568452   67149 provision.go:143] copyHostCerts
	I1028 18:28:39.568534   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:39.568553   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:39.568621   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:39.568745   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:39.568768   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:39.568810   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:39.568899   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:39.568911   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:39.568937   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:39.569006   67149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
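
provision.go above generates a per-machine server certificate whose SANs cover the loopback address, the VM IP, and the machine names. A compact sketch of issuing such a certificate with Go's crypto/x509; it uses a throwaway in-memory CA so it is self-contained, whereas minikube loads ca.pem/ca-key.pem from its certs directory, and error handling is elided for brevity.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA so the sketch runs standalone (errors ignored for brevity).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"sketch-ca"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SANs listed in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-223868"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.194")},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-223868"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)

    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
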
	I1028 18:28:39.786398   67149 provision.go:177] copyRemoteCerts
	I1028 18:28:39.786449   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:39.786482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.789048   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789373   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.789417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789535   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.789733   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.789884   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.790013   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:39.871816   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:39.902889   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:28:39.932633   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:39.958581   67149 provision.go:87] duration metric: took 396.851161ms to configureAuth
	I1028 18:28:39.958609   67149 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:39.958796   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:28:39.958881   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.961667   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962019   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.962044   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962240   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.962468   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962671   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962850   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.963037   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.963220   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.963239   67149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:40.179808   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:40.179843   67149 machine.go:96] duration metric: took 970.91659ms to provisionDockerMachine
	I1028 18:28:40.179857   67149 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:28:40.179869   67149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:40.179917   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.180287   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:40.180319   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.183011   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183383   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.183411   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183578   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.183770   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.183964   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.184114   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.270445   67149 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:40.275798   67149 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:40.275825   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:40.275898   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:40.275995   67149 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:40.276108   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:40.287529   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:40.310860   67149 start.go:296] duration metric: took 130.989944ms for postStartSetup
	I1028 18:28:40.310899   67149 fix.go:56] duration metric: took 20.417730265s for fixHost
	I1028 18:28:40.310925   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.313613   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.313967   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.314000   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.314175   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.314354   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314518   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314692   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.314862   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:40.315021   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:40.315032   67149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:40.421204   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140120.384024791
	
	I1028 18:28:40.421225   67149 fix.go:216] guest clock: 1730140120.384024791
	I1028 18:28:40.421235   67149 fix.go:229] Guest: 2024-10-28 18:28:40.384024791 +0000 UTC Remote: 2024-10-28 18:28:40.310903937 +0000 UTC m=+244.300202669 (delta=73.120854ms)
	I1028 18:28:40.421259   67149 fix.go:200] guest clock delta is within tolerance: 73.120854ms
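
fix.go above compares the guest's `date +%s.%N` output against the host clock and accepts the drift when the delta stays inside a tolerance. A sketch of that comparison, using the timestamp from the log as the example guest reading and a hypothetical tolerance value:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	const guestOutput = "1730140120.384024791" // what `date +%s.%N` printed on the guest
    	tolerance := 2 * time.Second               // hypothetical tolerance for this sketch

    	secs, err := strconv.ParseFloat(guestOutput, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest) // host "now" minus guest reading

    	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
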
	I1028 18:28:40.421265   67149 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 20.528130845s
	I1028 18:28:40.421297   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.421574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:40.424700   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425088   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.425116   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425286   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.425971   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426188   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426266   67149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:40.426340   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.426604   67149 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:40.426632   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.429408   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429569   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429807   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.429841   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429950   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430059   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.430092   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.430123   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430236   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430383   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430459   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430616   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.430614   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.509203   67149 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:40.540019   67149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:40.701732   67149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:40.710264   67149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:40.710354   67149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:40.731373   67149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:40.731398   67149 start.go:495] detecting cgroup driver to use...
	I1028 18:28:40.731465   67149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:40.751312   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:40.766288   67149 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:40.766399   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:40.783995   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:40.800295   67149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:40.940688   67149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:41.101493   67149 docker.go:233] disabling docker service ...
	I1028 18:28:41.101562   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:41.123350   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:41.141744   67149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:41.279020   67149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:41.414748   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:41.429469   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:41.448611   67149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:28:41.448669   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.460766   67149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:41.460842   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.473021   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.485888   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
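
The sed invocations above adjust `/etc/crio/crio.conf.d/02-crio.conf` so CRI-O uses `registry.k8s.io/pause:3.2` as its pause image and `cgroupfs` as its cgroup manager, with conmon pinned to the pod cgroup. One way to express the same edits in Go, operating on the file contents directly (a sketch; the sample input is invented for illustration):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf applies the three edits the log performs with sed: set
    // pause_image, drop any existing conmon_cgroup line, and set cgroup_manager
    // with conmon_cgroup = "pod" appended right after it.
    func rewriteCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
    	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).
    		ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	sample := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    	fmt.Print(rewriteCrioConf(sample))
    }
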
	I1028 18:28:41.497498   67149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:41.509250   67149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:41.519701   67149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:41.519754   67149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:41.534596   67149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:41.544814   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:41.681203   67149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:41.786879   67149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:41.786957   67149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:41.791981   67149 start.go:563] Will wait 60s for crictl version
	I1028 18:28:41.792041   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:41.796034   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:41.839867   67149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:41.839958   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.873029   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.904534   67149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 18:28:38.508232   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.720400   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.784720   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.892007   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:38.892083   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.392953   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.892228   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.912702   66801 api_server.go:72] duration metric: took 1.020696043s to wait for apiserver process to appear ...
	I1028 18:28:39.912728   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:28:39.912749   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:39.913221   66801 api_server.go:269] stopped: https://192.168.61.78:8443/healthz: Get "https://192.168.61.78:8443/healthz": dial tcp 192.168.61.78:8443: connect: connection refused
	I1028 18:28:40.413025   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
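
api_server.go above polls `https://192.168.61.78:8443/healthz` roughly every half second, treating connection refused, 403 (anonymous access before RBAC bootstrap finishes), and 500 (post-start hooks still failing) as "not ready yet". A hedged sketch of such a poll loop; TLS verification is skipped here only because the sketch has no access to the cluster CA, whereas minikube trusts its own CA bundle.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
    // the deadline passes, echoing the retry behaviour visible in the log.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// InsecureSkipVerify is for this sketch only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.78:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
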
	I1028 18:28:40.446984   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Start
	I1028 18:28:40.447191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring networks are active...
	I1028 18:28:40.447998   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network default is active
	I1028 18:28:40.448350   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network mk-default-k8s-diff-port-692033 is active
	I1028 18:28:40.448884   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Getting domain xml...
	I1028 18:28:40.449664   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Creating domain...
	I1028 18:28:41.740010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting to get IP...
	I1028 18:28:41.740827   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741273   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:41.741192   68341 retry.go:31] will retry after 276.06097ms: waiting for machine to come up
	I1028 18:28:42.018700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019135   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019159   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.019089   68341 retry.go:31] will retry after 318.252876ms: waiting for machine to come up
	I1028 18:28:42.338630   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339287   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339312   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.339205   68341 retry.go:31] will retry after 428.196122ms: waiting for machine to come up
	I1028 18:28:42.768656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769225   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769248   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.769134   68341 retry.go:31] will retry after 483.256928ms: waiting for machine to come up
	I1028 18:28:43.253739   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254304   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254353   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.254220   68341 retry.go:31] will retry after 577.932805ms: waiting for machine to come up
	I1028 18:28:43.834355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.834976   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.835021   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.834945   68341 retry.go:31] will retry after 639.531065ms: waiting for machine to come up
	I1028 18:28:44.475727   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476299   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476331   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:44.476248   68341 retry.go:31] will retry after 1.171398436s: waiting for machine to come up
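
The "will retry after ..." lines above show libmachine polling libvirt's DHCP leases with growing, jittered delays until the restarted domain picks up an address. A generic sketch of that retry shape; the lease lookup itself is a placeholder, since querying libvirt is driver-specific and not part of what the log shows.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"os"
    	"time"
    )

    // lookupIP stands in for the driver-specific DHCP-lease query; it is a
    // placeholder, not part of the minikube code the log comes from.
    func lookupIP() (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP retries the lookup with growing, jittered delays, similar to the
    // retry lines in the log.
    func waitForIP(deadline time.Duration) (string, error) {
    	start := time.Now()
    	for attempt := 1; time.Since(start) < deadline; attempt++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := time.Duration(attempt)*250*time.Millisecond +
    			time.Duration(rand.Int63n(int64(200*time.Millisecond)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    	}
    	return "", errors.New("machine did not obtain an IP address in time")
    }

    func main() {
    	if _, err := waitForIP(1 * time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
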
	I1028 18:28:43.473059   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.473096   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.473113   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.588338   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.588371   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.913612   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.918557   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:43.918598   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.412902   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.425930   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.425971   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.913482   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.926092   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.926126   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:45.413673   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:45.419384   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:28:45.430384   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:28:45.430431   66801 api_server.go:131] duration metric: took 5.517694037s to wait for apiserver health ...
	I1028 18:28:45.430442   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:45.430450   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:45.432587   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
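
The wait loop traced above is the usual readiness pattern: poll /healthz, treat 403 (the anonymous probe rejected while RBAC bootstraps) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) as "not ready yet", and stop once the endpoint answers 200 "ok". The Go sketch below is illustrative only; the endpoint, timeout, and the decision to skip TLS verification are assumptions for the example, not minikube's actual api_server.go code.

// Minimal sketch: poll the apiserver's /healthz endpoint until it returns
// 200 "ok" or the timeout elapses. Non-200 responses are printed and retried,
// mirroring the 403/500 responses seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(host string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe runs anonymously, so the server certificate is not verified here (assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(fmt.Sprintf("https://%s/healthz", host))
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("192.168.61.78:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
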
	I1028 18:28:41.906005   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:41.909278   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909683   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:41.909741   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909996   67149 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:41.915405   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:41.931747   67149 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:41.931886   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:28:41.931944   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:41.987909   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:41.987966   67149 ssh_runner.go:195] Run: which lz4
	I1028 18:28:41.993527   67149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:28:41.998982   67149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:28:41.999014   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:28:43.643480   67149 crio.go:462] duration metric: took 1.649982959s to copy over tarball
	I1028 18:28:43.643559   67149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:28:45.433946   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:28:45.453114   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:28:45.479255   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:28:45.497020   66801 system_pods.go:59] 8 kube-system pods found
	I1028 18:28:45.497072   66801 system_pods.go:61] "coredns-7c65d6cfc9-74b6t" [b6a550da-7c40-4283-b49e-1ab29e652037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:28:45.497084   66801 system_pods.go:61] "etcd-no-preload-051152" [d5b31ded-95ce-4dde-ba88-e653dfdb8d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:28:45.497097   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [95d0acb0-4d58-4307-9f4f-10f920ff4745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:28:45.497105   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [722530e1-1d76-40dc-8a24-fe79d0167835] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:28:45.497112   66801 system_pods.go:61] "kube-proxy-kg42f" [7891354b-a501-45c4-b15c-cf6d29e3721f] Running
	I1028 18:28:45.497121   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [c658808c-79c2-4b8e-b72c-0b2d8e058ab4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:28:45.497130   66801 system_pods.go:61] "metrics-server-6867b74b74-vgd8k" [626b71a2-6904-409f-9274-6963a94e6ac2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:28:45.497137   66801 system_pods.go:61] "storage-provisioner" [39bf84c9-9c6f-4048-8a11-460fb12f622b] Running
	I1028 18:28:45.497146   66801 system_pods.go:74] duration metric: took 17.863894ms to wait for pod list to return data ...
	I1028 18:28:45.497160   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:28:45.501945   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:28:45.501977   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:28:45.501993   66801 node_conditions.go:105] duration metric: took 4.827279ms to run NodePressure ...
	I1028 18:28:45.502014   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:45.835429   66801 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840823   66801 kubeadm.go:739] kubelet initialised
	I1028 18:28:45.840852   66801 kubeadm.go:740] duration metric: took 5.391212ms waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840862   66801 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:28:45.846565   66801 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:45.648994   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649559   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649587   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:45.649512   68341 retry.go:31] will retry after 1.258585317s: waiting for machine to come up
	I1028 18:28:46.909541   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909955   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:46.909911   68341 retry.go:31] will retry after 1.827150306s: waiting for machine to come up
	I1028 18:28:48.738193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738696   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738725   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:48.738653   68341 retry.go:31] will retry after 1.738249889s: waiting for machine to come up
	I1028 18:28:46.758767   67149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115173801s)
	I1028 18:28:46.758810   67149 crio.go:469] duration metric: took 3.115300284s to extract the tarball
	I1028 18:28:46.758821   67149 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:28:46.816906   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:46.864347   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:46.864376   67149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:46.864499   67149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.864564   67149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.864623   67149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.864639   67149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.864674   67149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.864686   67149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:28:46.864710   67149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.864529   67149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:46.866383   67149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.866445   67149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.866493   67149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.866579   67149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.866795   67149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.867073   67149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.867095   67149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:28:46.867488   67149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.043358   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.053844   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.055684   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.056812   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.066211   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.090931   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.104900   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:28:47.141214   67149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:28:47.141260   67149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.141307   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202804   67149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:28:47.202863   67149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.202873   67149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:28:47.202903   67149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.202915   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202944   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.234811   67149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:28:47.234853   67149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.234900   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.236717   67149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:28:47.236751   67149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.236798   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.243872   67149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:28:47.243918   67149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.243971   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260210   67149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:28:47.260253   67149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:28:47.260256   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.260293   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260398   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.260438   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.260456   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.260517   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.260559   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413617   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.413776   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.413804   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413825   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.414063   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.414103   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.414150   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.544933   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.581577   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.582079   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.582161   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.582206   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.582344   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.582819   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.662237   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:28:47.736212   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.739757   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:28:47.739928   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:28:47.739802   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:28:47.739812   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:28:47.739841   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:28:47.783578   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:28:49.121698   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:49.266583   67149 cache_images.go:92] duration metric: took 2.402188013s to LoadCachedImages
	W1028 18:28:49.266686   67149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
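
The image-cache pass traced above has a simple shape: ask the runtime (via podman image inspect) whether each required image is already present at the expected hash, remove stale copies with crictl rmi, and otherwise load the image from the on-disk cache, which fails here because the cached kube-proxy file is missing. The following rough sketch of that presence-or-load check is illustrative only (the shell commands and paths are copied from the log); it is not minikube's cache_images implementation.

// Illustrative sketch: check whether an image exists in the CRI image store,
// and otherwise verify that the on-disk cache file it would be loaded from
// is actually present.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureImage(image, cachePath string) error {
	// Is the image already present in the container runtime's store?
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Not present: it would have to be loaded from the local image cache.
	if _, err := os.Stat(cachePath); err != nil {
		return fmt.Errorf("cannot load %s: %w", image, err)
	}
	fmt.Printf("would load %s from %s\n", image, cachePath)
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/kube-proxy:v1.20.0",
		"/home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0")
	if err != nil {
		fmt.Println(err)
	}
}
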
	I1028 18:28:49.266702   67149 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:28:49.266828   67149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:49.266918   67149 ssh_runner.go:195] Run: crio config
	I1028 18:28:49.318146   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:28:49.318167   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:49.318176   67149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:49.318193   67149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:28:49.318310   67149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:49.318371   67149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:28:49.329249   67149 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:49.329339   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:49.339379   67149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:28:49.359216   67149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:49.378289   67149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 18:28:49.397766   67149 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:49.401788   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:49.418204   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:49.558031   67149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:49.575443   67149 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:28:49.575469   67149 certs.go:194] generating shared ca certs ...
	I1028 18:28:49.575489   67149 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:49.575693   67149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:49.575746   67149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:49.575756   67149 certs.go:256] generating profile certs ...
	I1028 18:28:49.575859   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:28:49.575914   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:28:49.575951   67149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:28:49.576058   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:49.576092   67149 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:49.576103   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:49.576131   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:49.576162   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:49.576186   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:49.576238   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:49.576994   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:49.622814   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:49.653690   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:49.678975   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:49.707340   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:28:49.744836   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:28:49.776367   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:49.818999   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:28:49.847531   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:49.871924   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:49.897751   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:49.923267   67149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:49.939805   67149 ssh_runner.go:195] Run: openssl version
	I1028 18:28:49.945611   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:49.956191   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960862   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960916   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.966701   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:49.977882   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:49.990873   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995751   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995810   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:50.001891   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:50.013508   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:50.028132   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034144   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034217   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.041768   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:50.054079   67149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:50.058983   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:50.064802   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:50.070790   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:50.077090   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:50.083149   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:50.089232   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
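
The openssl -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed directly against the PEM file; the sketch below is illustrative only (paths copied from the log), not what certs.go actually does.

// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// report whether a certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, "err:", err)
	}
}
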
	I1028 18:28:50.095205   67149 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:50.095338   67149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:50.095411   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.139777   67149 cri.go:89] found id: ""
	I1028 18:28:50.139854   67149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:50.151967   67149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:50.151986   67149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:50.152040   67149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:50.163454   67149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:50.164876   67149 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:28:50.165798   67149 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-223868" cluster setting kubeconfig missing "old-k8s-version-223868" context setting]
	I1028 18:28:50.167121   67149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:50.169545   67149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:50.179447   67149 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.194
	I1028 18:28:50.179477   67149 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:50.179490   67149 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:50.179542   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.213891   67149 cri.go:89] found id: ""
	I1028 18:28:50.213963   67149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:50.231491   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:50.241752   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:50.241775   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:50.241829   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:50.252015   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:50.252075   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:50.263032   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:50.273500   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:50.273564   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:50.283603   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.293521   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:50.293567   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.303701   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:50.316202   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:50.316269   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:28:50.327841   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:50.341366   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:50.469586   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:49.414188   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:51.855115   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:50.478658   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479208   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479237   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:50.479151   68341 retry.go:31] will retry after 2.362711935s: waiting for machine to come up
	I1028 18:28:52.842907   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843290   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843314   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:52.843250   68341 retry.go:31] will retry after 2.561710525s: waiting for machine to come up
	I1028 18:28:51.507608   67149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037983659s)
	I1028 18:28:51.507645   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.733141   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.842228   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.947336   67149 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:51.947430   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.447618   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.947814   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.448476   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.947571   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.448371   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.947700   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.447735   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.948435   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.857886   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:54.862972   66801 pod_ready.go:93] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:54.863005   66801 pod_ready.go:82] duration metric: took 9.016413449s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:54.863019   66801 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869043   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:55.869076   66801 pod_ready.go:82] duration metric: took 1.006049217s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869091   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874842   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.874865   66801 pod_ready.go:82] duration metric: took 2.005766936s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874875   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878913   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.878930   66801 pod_ready.go:82] duration metric: took 4.049698ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878937   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889897   66801 pod_ready.go:93] pod "kube-proxy-kg42f" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.889913   66801 pod_ready.go:82] duration metric: took 10.971269ms for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889921   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
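The pod_ready.go lines above record minikube waiting (up to 4m0s per pod) for each kube-system control-plane pod to report the Ready condition. A minimal client-go sketch of the same readiness check, assuming a kubeconfig at the default location and the k8s.io/client-go module; this illustrates the idea rather than minikube's actual pod_ready implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod in kube-system has the Ready condition set to True.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			ready, err := isPodReady(ctx, cs, "etcd-no-preload-051152") // pod name taken from the log
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
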
	I1028 18:28:55.407934   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408336   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408362   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:55.408274   68341 retry.go:31] will retry after 3.762790995s: waiting for machine to come up
	I1028 18:28:59.173489   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173900   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Found IP for machine: 192.168.39.215
	I1028 18:28:59.173923   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has current primary IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173929   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserving static IP address...
	I1028 18:28:59.174320   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.174343   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | skip adding static IP to network mk-default-k8s-diff-port-692033 - found existing host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"}
	I1028 18:28:59.174355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserved static IP address: 192.168.39.215
	I1028 18:28:59.174365   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for SSH to be available...
	I1028 18:28:59.174376   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Getting to WaitForSSH function...
	I1028 18:28:59.176441   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176755   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.176786   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176913   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH client type: external
	I1028 18:28:59.176936   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa (-rw-------)
	I1028 18:28:59.176958   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:59.176970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | About to run SSH command:
	I1028 18:28:59.176982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | exit 0
	I1028 18:28:59.300272   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:59.300649   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetConfigRaw
	I1028 18:28:59.301261   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.303505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.303832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.303857   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.304080   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:28:59.304287   67489 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:59.304310   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:59.304535   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.306713   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307008   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.307042   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307187   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.307348   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307627   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.307768   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.307936   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.307946   67489 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:59.412710   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:59.412743   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413009   67489 buildroot.go:166] provisioning hostname "default-k8s-diff-port-692033"
	I1028 18:28:59.413041   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.415772   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416048   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.416070   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416251   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.416437   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416728   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.416847   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.417030   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.417041   67489 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-692033 && echo "default-k8s-diff-port-692033" | sudo tee /etc/hostname
	I1028 18:28:59.538491   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-692033
	
	I1028 18:28:59.538518   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.540842   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541144   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.541173   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.541527   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541684   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541815   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.541964   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.542123   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.542138   67489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-692033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-692033/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-692033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:59.657448   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:59.657480   67489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:59.657524   67489 buildroot.go:174] setting up certificates
	I1028 18:28:59.657539   67489 provision.go:84] configureAuth start
	I1028 18:28:59.657556   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.657832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.660465   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660797   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.660840   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660949   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.663393   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663801   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.663830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663977   67489 provision.go:143] copyHostCerts
	I1028 18:28:59.664049   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:59.664062   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:59.664117   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:59.664217   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:59.664228   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:59.664250   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:59.664300   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:59.664308   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:59.664327   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:59.664403   67489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-692033 san=[127.0.0.1 192.168.39.215 default-k8s-diff-port-692033 localhost minikube]
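provision.go:117 above generates a server certificate signed by the machine CA with the listed SANs (127.0.0.1, 192.168.39.215, the machine name, localhost, minikube). A rough Go sketch of issuing such a CA-signed server certificate with crypto/x509 follows; the file paths are illustrative, the CA key is assumed to be a PKCS#1 RSA key, and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// Sketch only: every ignored error below must be checked in real code.
	func main() {
		// Load the CA generated earlier (paths are illustrative).
		caCertPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key

		// Fresh key pair for the server certificate.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-692033"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the ones listed in the log line above.
			DNSNames:    []string{"default-k8s-diff-port-692033", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.215")},
		}

		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	}
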
	I1028 18:28:59.882619   67489 provision.go:177] copyRemoteCerts
	I1028 18:28:59.882672   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:59.882695   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.885303   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.885686   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885927   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.886121   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.886278   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.886382   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:28:59.975231   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:00.000412   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 18:29:00.024424   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:29:00.048646   67489 provision.go:87] duration metric: took 391.090444ms to configureAuth
	I1028 18:29:00.048674   67489 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:00.048884   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:00.048970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.051793   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052156   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.052185   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.052532   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052729   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052894   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.053080   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.053241   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.053254   67489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:00.525285   66600 start.go:364] duration metric: took 54.917560334s to acquireMachinesLock for "embed-certs-021370"
	I1028 18:29:00.525349   66600 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:29:00.525359   66600 fix.go:54] fixHost starting: 
	I1028 18:29:00.525740   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:29:00.525778   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:29:00.544614   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I1028 18:29:00.544976   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:29:00.545433   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:29:00.545455   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:29:00.545842   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:29:00.546046   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:00.546230   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:29:00.547770   66600 fix.go:112] recreateIfNeeded on embed-certs-021370: state=Stopped err=<nil>
	I1028 18:29:00.547794   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	W1028 18:29:00.547957   66600 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:29:00.549753   66600 out.go:177] * Restarting existing kvm2 VM for "embed-certs-021370" ...
	I1028 18:28:56.447531   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.947711   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.447782   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.947642   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.948256   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.447558   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.948018   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.448186   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.947565   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.280618   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:00.280641   67489 machine.go:96] duration metric: took 976.341252ms to provisionDockerMachine
	I1028 18:29:00.280653   67489 start.go:293] postStartSetup for "default-k8s-diff-port-692033" (driver="kvm2")
	I1028 18:29:00.280669   67489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:00.280690   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.281004   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:00.281044   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.283656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.283977   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.284010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.284170   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.284382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.284549   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.284692   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.372947   67489 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:00.377456   67489 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:00.377480   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:00.377547   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:00.377646   67489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:00.377762   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:00.388767   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:00.413520   67489 start.go:296] duration metric: took 132.852709ms for postStartSetup
	I1028 18:29:00.413557   67489 fix.go:56] duration metric: took 19.992127182s for fixHost
	I1028 18:29:00.413578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.416040   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416377   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.416405   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416553   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.416756   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.416930   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.417065   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.417228   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.417412   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.417424   67489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:00.525082   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140140.492840769
	
	I1028 18:29:00.525105   67489 fix.go:216] guest clock: 1730140140.492840769
	I1028 18:29:00.525114   67489 fix.go:229] Guest: 2024-10-28 18:29:00.492840769 +0000 UTC Remote: 2024-10-28 18:29:00.413561948 +0000 UTC m=+205.301669628 (delta=79.278821ms)
	I1028 18:29:00.525169   67489 fix.go:200] guest clock delta is within tolerance: 79.278821ms
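The fix.go lines above read the guest clock with "date +%s.%N" over SSH, compare it against the host clock, and accept the result because the delta (about 79ms) is within tolerance. A small Go sketch of that skew check, run locally instead of over SSH and with an illustrative tolerance value:

	package main

	import (
		"fmt"
		"math"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta runs `date +%s.%N` (locally here; minikube runs it on the guest over SSH),
	// parses the epoch timestamp, and returns the offset from this process's clock.
	// float64 parsing loses a little sub-microsecond precision, which is fine for a skew check.
	func guestClockDelta() (time.Duration, error) {
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			return 0, err
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}

	func main() {
		delta, err := guestClockDelta()
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // illustrative bound, not minikube's exact threshold
		if math.Abs(float64(delta)) < float64(tolerance) {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		} else {
			fmt.Printf("guest clock is skewed by %s, time sync needed\n", delta)
		}
	}
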
	I1028 18:29:00.525180   67489 start.go:83] releasing machines lock for "default-k8s-diff-port-692033", held for 20.103791447s
	I1028 18:29:00.525214   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.525495   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:00.528023   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528385   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.528415   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529038   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529287   67489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:00.529323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.529380   67489 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:00.529403   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.531822   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532022   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532163   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532294   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532443   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532481   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532488   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532612   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532680   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.532830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532830   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.532965   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.533103   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.609362   67489 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:00.636444   67489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:00.785916   67489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:00.792198   67489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:00.792279   67489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:00.812095   67489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:00.812124   67489 start.go:495] detecting cgroup driver to use...
	I1028 18:29:00.812190   67489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:00.829536   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:00.844021   67489 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:00.844090   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:00.858561   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:00.873128   67489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:00.990494   67489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:01.148650   67489 docker.go:233] disabling docker service ...
	I1028 18:29:01.148729   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:01.162487   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:01.177407   67489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:01.303665   67489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:01.430019   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:01.443822   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:01.462768   67489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:01.462830   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.473669   67489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:01.473737   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.484364   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.496220   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.507216   67489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:01.518848   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.534216   67489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.554294   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
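The sed commands above point cri-o at the pause image, switch it to the cgroupfs cgroup manager, pin conmon_cgroup to "pod", and open unprivileged low ports via default_sysctls. An illustrative reconstruction of the resulting /etc/crio/crio.conf.d/02-crio.conf fragment is sketched below (section names assumed from the stock CRI-O config layout; the real drop-in carries additional defaults not shown):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
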
	I1028 18:29:01.565095   67489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:01.574547   67489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:01.574614   67489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:01.596531   67489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:01.606858   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:01.740272   67489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:01.844969   67489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:01.845053   67489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:01.850004   67489 start.go:563] Will wait 60s for crictl version
	I1028 18:29:01.850056   67489 ssh_runner.go:195] Run: which crictl
	I1028 18:29:01.854032   67489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:01.893281   67489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:01.893367   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.923557   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.956282   67489 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:00.551001   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Start
	I1028 18:29:00.551172   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring networks are active...
	I1028 18:29:00.551820   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network default is active
	I1028 18:29:00.552130   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network mk-embed-certs-021370 is active
	I1028 18:29:00.552482   66600 main.go:141] libmachine: (embed-certs-021370) Getting domain xml...
	I1028 18:29:00.553186   66600 main.go:141] libmachine: (embed-certs-021370) Creating domain...
	I1028 18:29:01.830016   66600 main.go:141] libmachine: (embed-certs-021370) Waiting to get IP...
	I1028 18:29:01.831046   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:01.831522   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:01.831630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:01.831518   68528 retry.go:31] will retry after 300.306268ms: waiting for machine to come up
	I1028 18:29:02.132901   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.133350   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.133383   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.133293   68528 retry.go:31] will retry after 383.232008ms: waiting for machine to come up
	I1028 18:29:02.518736   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.519274   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.519299   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.519241   68528 retry.go:31] will retry after 354.591942ms: waiting for machine to come up
	I1028 18:29:02.875813   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.876360   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.876397   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.876325   68528 retry.go:31] will retry after 529.444037ms: waiting for machine to come up
	I1028 18:28:58.895888   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:58.895918   66801 pod_ready.go:82] duration metric: took 1.005990705s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:58.895932   66801 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:00.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:02.903390   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:01.957748   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:01.960967   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:01.961382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961635   67489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:01.966300   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:01.979786   67489 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:01.979899   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:01.979957   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:02.020659   67489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:02.020716   67489 ssh_runner.go:195] Run: which lz4
	I1028 18:29:02.024772   67489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:02.030183   67489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:02.030206   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:03.449423   67489 crio.go:462] duration metric: took 1.424673911s to copy over tarball
	I1028 18:29:03.449498   67489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:01.447557   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.947946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.448522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.947533   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.447522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.948025   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.448136   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.948157   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.447635   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.947987   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.407835   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:03.408366   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:03.408390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:03.408265   68528 retry.go:31] will retry after 680.005296ms: waiting for machine to come up
	I1028 18:29:04.089802   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.090390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.090409   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.090338   68528 retry.go:31] will retry after 833.681725ms: waiting for machine to come up
	I1028 18:29:04.925788   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.926278   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.926298   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.926227   68528 retry.go:31] will retry after 1.050194845s: waiting for machine to come up
	I1028 18:29:05.978270   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:05.978715   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:05.978742   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:05.978669   68528 retry.go:31] will retry after 1.416773018s: waiting for machine to come up
	I1028 18:29:07.397367   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:07.397843   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:07.397876   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:07.397787   68528 retry.go:31] will retry after 1.621623459s: waiting for machine to come up
	I1028 18:29:04.903465   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:06.903931   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:05.622217   67489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172685001s)
	I1028 18:29:05.622253   67489 crio.go:469] duration metric: took 2.172801769s to extract the tarball
	I1028 18:29:05.622264   67489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:05.660585   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:05.705484   67489 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:05.705510   67489 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:05.705520   67489 kubeadm.go:934] updating node { 192.168.39.215 8444 v1.31.2 crio true true} ...
	I1028 18:29:05.705634   67489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-692033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:05.705725   67489 ssh_runner.go:195] Run: crio config
	I1028 18:29:05.760618   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:05.760649   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:05.760661   67489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:05.760690   67489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-692033 NodeName:default-k8s-diff-port-692033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:05.760858   67489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-692033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:05.760936   67489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:05.771392   67489 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:05.771464   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:05.780926   67489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1028 18:29:05.797951   67489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:05.814159   67489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1028 18:29:05.830723   67489 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:05.835163   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
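The bash one-liner above is how minikube pins control-plane.minikube.internal to the node IP: it strips any stale mapping from /etc/hosts and appends the current one before reloading systemd and starting kubelet. The same idea as a local Go sketch (the IP, hostname, and path are taken from the log line; writing /etc/hosts needs root, and this is not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

const hostsPath = "/etc/hosts"

// ensureHostEntry rewrites hostsPath so exactly one line maps name to ip,
// mirroring the grep/echo pipeline shown in the log above.
func ensureHostEntry(ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for the control-plane alias
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("192.168.39.215", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}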
	I1028 18:29:05.847192   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:05.972201   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:05.990475   67489 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033 for IP: 192.168.39.215
	I1028 18:29:05.990492   67489 certs.go:194] generating shared ca certs ...
	I1028 18:29:05.990511   67489 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:05.990711   67489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:05.990764   67489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:05.990776   67489 certs.go:256] generating profile certs ...
	I1028 18:29:05.990875   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.key
	I1028 18:29:05.990991   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key.81b9981a
	I1028 18:29:05.991052   67489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key
	I1028 18:29:05.991218   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:05.991268   67489 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:05.991283   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:05.991317   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:05.991359   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:05.991405   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:05.991481   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:05.992294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:06.033938   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:06.070407   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:06.115934   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:06.144600   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 18:29:06.169202   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:06.196294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:06.219384   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:29:06.242169   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:06.266506   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:06.290175   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:06.313006   67489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:06.329076   67489 ssh_runner.go:195] Run: openssl version
	I1028 18:29:06.335322   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:06.346021   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350401   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350464   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.356134   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:06.366765   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:06.377486   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381920   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381978   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.387492   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:06.398392   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:06.413238   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418376   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418429   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.423997   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:06.436170   67489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:06.440853   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:06.446851   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:06.452980   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:06.458973   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:06.465088   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:06.470776   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
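Each of the openssl x509 -checkend 86400 runs above asks whether a control-plane certificate will still be valid 24 hours from now; only when all of them pass does the restart path reuse the existing certs. A standard-library Go equivalent of one such check (the path is simply the first certificate from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}

openssl x509 -checkend exits non-zero when the certificate will expire within the given number of seconds; expiresWithin returns true in that same case.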
	I1028 18:29:06.476462   67489 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:06.476588   67489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:06.476638   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.519820   67489 cri.go:89] found id: ""
	I1028 18:29:06.519884   67489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:06.530091   67489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:06.530110   67489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:06.530171   67489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:06.539807   67489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:06.540946   67489 kubeconfig.go:125] found "default-k8s-diff-port-692033" server: "https://192.168.39.215:8444"
	I1028 18:29:06.543088   67489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:06.552354   67489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I1028 18:29:06.552379   67489 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:06.552389   67489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:06.552445   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.586545   67489 cri.go:89] found id: ""
	I1028 18:29:06.586611   67489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:06.603418   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:06.612856   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:06.612876   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:06.612921   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:29:06.621852   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:06.621900   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:06.631132   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:29:06.640088   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:06.640158   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:06.651007   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.660034   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:06.660104   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.669587   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:29:06.678863   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:06.678937   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:06.688820   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:06.698470   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:06.820432   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.030810   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210339958s)
	I1028 18:29:08.030839   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.255000   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.321500   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.412775   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:08.412854   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.913648   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.413011   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.459009   67489 api_server.go:72] duration metric: took 1.046232596s to wait for apiserver process to appear ...
	I1028 18:29:09.459041   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:09.459062   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:09.459626   67489 api_server.go:269] stopped: https://192.168.39.215:8444/healthz: Get "https://192.168.39.215:8444/healthz": dial tcp 192.168.39.215:8444: connect: connection refused
	I1028 18:29:09.960128   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:06.447581   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.947550   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.447977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.947491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.447960   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.947662   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.448201   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.947753   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.448116   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.948175   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.020419   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:09.020867   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:09.020899   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:09.020814   68528 retry.go:31] will retry after 2.2230034s: waiting for machine to come up
	I1028 18:29:11.245136   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:11.245630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:11.245657   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:11.245595   68528 retry.go:31] will retry after 2.153898764s: waiting for machine to come up
	I1028 18:29:09.403596   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:11.903702   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:12.135346   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.135381   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.135394   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.166207   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.166234   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.459631   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.473153   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.473183   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:12.959778   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.969281   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.969320   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:13.459913   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:13.464362   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:29:13.471925   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:13.471953   67489 api_server.go:131] duration metric: took 4.012904227s to wait for apiserver health ...
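The 403 and 500 responses earlier in this wait are expected while the restarted apiserver finishes its RBAC and priority-class post-start hooks; the loop simply keeps polling /healthz until it returns 200, which here took about four seconds. A Go sketch of such a polling loop follows; it skips TLS verification purely for brevity, which is an assumption rather than what minikube's api_server.go necessarily does.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal cert during bootstrap; a real
		// client would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~500ms cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.215:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}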
	I1028 18:29:13.471964   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:13.471971   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:13.473908   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:13.475283   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:13.487393   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:13.532627   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:13.544945   67489 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:13.544982   67489 system_pods.go:61] "coredns-7c65d6cfc9-ctx9z" [7067f349-3a22-468d-bd9d-19d057eb43f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:13.544993   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [313161ff-f30f-4e25-978d-9aa2eba7fc44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:13.545004   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [e9a66e8e-946b-4365-bd63-3adfdd75e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:13.545014   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [0e682f68-2f9a-4bf3-bbe4-3a6b1ef6778d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:13.545021   67489 system_pods.go:61] "kube-proxy-86rll" [d34f46c6-3227-40c9-ac97-066b98bfce32] Running
	I1028 18:29:13.545029   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [b9058969-31e2-4249-862f-ef5de7784adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:13.545043   67489 system_pods.go:61] "metrics-server-6867b74b74-dz4nl" [833c650e-5f5d-46a1-9ae1-64619c53a92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:13.545047   67489 system_pods.go:61] "storage-provisioner" [342db8fa-7873-47b0-a5a6-52cde2e19d47] Running
	I1028 18:29:13.545053   67489 system_pods.go:74] duration metric: took 12.403166ms to wait for pod list to return data ...
	I1028 18:29:13.545060   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:13.548591   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:13.548619   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:13.548632   67489 node_conditions.go:105] duration metric: took 3.567222ms to run NodePressure ...
	I1028 18:29:13.548649   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:13.818718   67489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826139   67489 kubeadm.go:739] kubelet initialised
	I1028 18:29:13.826161   67489 kubeadm.go:740] duration metric: took 7.415257ms waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826170   67489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:13.833418   67489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.838793   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838820   67489 pod_ready.go:82] duration metric: took 5.377698ms for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.838831   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838840   67489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.843172   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843195   67489 pod_ready.go:82] duration metric: took 4.34633ms for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.843203   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843209   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.847581   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847615   67489 pod_ready.go:82] duration metric: took 4.389898ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.847630   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847642   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
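With the apiserver healthy, the restart flow waits up to 4m0s for each system-critical pod to report Ready, and (as the pod_ready.go lines above show) skips a pod whose node is itself still NotReady. A hedged client-go sketch of the per-pod readiness check; the kubeconfig path and pod name are placeholders rather than what minikube actually reads.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-7c65d6cfc9-ctx9z", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}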
	I1028 18:29:11.448521   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.947592   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.448427   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.948413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.448390   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.948518   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.447929   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.948106   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.948236   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.401547   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:13.402054   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:13.402083   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:13.402028   68528 retry.go:31] will retry after 2.345507901s: waiting for machine to come up
	I1028 18:29:15.749122   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:15.749485   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:15.749502   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:15.749451   68528 retry.go:31] will retry after 2.974576274s: waiting for machine to come up
	I1028 18:29:13.903930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.403934   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:15.858338   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:18.354245   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.447535   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.948117   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.448197   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.948491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.948393   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.448406   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.947788   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.448100   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.947907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.727508   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.727990   66600 main.go:141] libmachine: (embed-certs-021370) Found IP for machine: 192.168.50.62
	I1028 18:29:18.728011   66600 main.go:141] libmachine: (embed-certs-021370) Reserving static IP address...
	I1028 18:29:18.728028   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has current primary IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.728447   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.728478   66600 main.go:141] libmachine: (embed-certs-021370) Reserved static IP address: 192.168.50.62
	I1028 18:29:18.728497   66600 main.go:141] libmachine: (embed-certs-021370) DBG | skip adding static IP to network mk-embed-certs-021370 - found existing host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"}
	I1028 18:29:18.728510   66600 main.go:141] libmachine: (embed-certs-021370) Waiting for SSH to be available...
	I1028 18:29:18.728520   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Getting to WaitForSSH function...
	I1028 18:29:18.730574   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731031   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.731069   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731227   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH client type: external
	I1028 18:29:18.731248   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa (-rw-------)
	I1028 18:29:18.731282   66600 main.go:141] libmachine: (embed-certs-021370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:29:18.731310   66600 main.go:141] libmachine: (embed-certs-021370) DBG | About to run SSH command:
	I1028 18:29:18.731327   66600 main.go:141] libmachine: (embed-certs-021370) DBG | exit 0
	I1028 18:29:18.860213   66600 main.go:141] libmachine: (embed-certs-021370) DBG | SSH cmd err, output: <nil>: 
	I1028 18:29:18.860619   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetConfigRaw
	I1028 18:29:18.861235   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:18.863576   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.863932   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.863956   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.864224   66600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/config.json ...
	I1028 18:29:18.864465   66600 machine.go:93] provisionDockerMachine start ...
	I1028 18:29:18.864521   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:18.864720   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.866951   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867314   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.867349   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867511   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.867665   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867811   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867941   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.868072   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.868230   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.868239   66600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:29:18.972695   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:29:18.972729   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.972970   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:29:18.973000   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.973209   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.975608   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.975889   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.975915   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.976082   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.976269   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976401   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976505   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.976625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.976796   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.976809   66600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-021370 && echo "embed-certs-021370" | sudo tee /etc/hostname
	I1028 18:29:19.094622   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-021370
	
	I1028 18:29:19.094655   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.097110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097436   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.097460   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097639   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.097817   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.097967   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.098121   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.098309   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.098517   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.098533   66600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-021370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-021370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-021370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:29:19.218088   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:29:19.218112   66600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:29:19.218140   66600 buildroot.go:174] setting up certificates
	I1028 18:29:19.218150   66600 provision.go:84] configureAuth start
	I1028 18:29:19.218159   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:19.218411   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:19.221093   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221441   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.221469   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221641   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.223628   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.223908   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.223928   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.224085   66600 provision.go:143] copyHostCerts
	I1028 18:29:19.224155   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:29:19.224185   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:29:19.224252   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:29:19.224380   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:29:19.224390   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:29:19.224422   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:29:19.224532   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:29:19.224542   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:29:19.224570   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:29:19.224655   66600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-021370 san=[127.0.0.1 192.168.50.62 embed-certs-021370 localhost minikube]
	I1028 18:29:19.402860   66600 provision.go:177] copyRemoteCerts
	I1028 18:29:19.402925   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:29:19.402954   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.405556   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.405904   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.405939   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.406100   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.406265   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.406391   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.406494   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.486543   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:19.510790   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:29:19.534037   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:29:19.557509   66600 provision.go:87] duration metric: took 339.349044ms to configureAuth
	I1028 18:29:19.557531   66600 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:19.557681   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:19.557745   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.560240   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560594   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.560623   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560757   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.560931   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561110   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561320   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.561490   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.561651   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.561664   66600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:19.781270   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:19.781304   66600 machine.go:96] duration metric: took 916.814114ms to provisionDockerMachine
	I1028 18:29:19.781317   66600 start.go:293] postStartSetup for "embed-certs-021370" (driver="kvm2")
	I1028 18:29:19.781327   66600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:19.781345   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:19.781664   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:19.781690   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.784176   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784509   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.784538   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784667   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.784854   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.785028   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.785171   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.867396   66600 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:19.871516   66600 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:19.871542   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:19.871630   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:19.871717   66600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:19.871799   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:19.882017   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:19.906531   66600 start.go:296] duration metric: took 125.203636ms for postStartSetup
	I1028 18:29:19.906562   66600 fix.go:56] duration metric: took 19.381205641s for fixHost
	I1028 18:29:19.906581   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.909285   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909610   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.909640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909778   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.909980   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910311   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910444   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.910625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.910788   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.910803   66600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:20.017311   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140159.989127147
	
	I1028 18:29:20.017339   66600 fix.go:216] guest clock: 1730140159.989127147
	I1028 18:29:20.017346   66600 fix.go:229] Guest: 2024-10-28 18:29:19.989127147 +0000 UTC Remote: 2024-10-28 18:29:19.906566181 +0000 UTC m=+356.890524496 (delta=82.560966ms)
	I1028 18:29:20.017368   66600 fix.go:200] guest clock delta is within tolerance: 82.560966ms
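fix.go above compares the guest's clock against the timestamp recorded on the host side and only forces a resync when the difference leaves a tolerance window (here the delta is roughly 82ms). A toy Go version of that arithmetic, with the tolerance value assumed for illustration rather than taken from minikube:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values echo the log: guest clock vs. host-side remote timestamp.
	guest := time.Unix(1730140159, 989127147).UTC()
	remote := time.Date(2024, 10, 28, 18, 29, 19, 906566181, time.UTC)

	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed tolerance, illustration only

	if delta.Abs() <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the clock\n", delta)
	}
}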
	I1028 18:29:20.017374   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 19.492049852s
	I1028 18:29:20.017396   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.017657   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:20.020286   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020680   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.020704   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020816   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021307   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021491   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021577   66600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:20.021616   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.021746   66600 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:20.021767   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.024157   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024429   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024511   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024533   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024679   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.024856   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.024880   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024896   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.025019   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025070   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.025160   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.025201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.025304   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025443   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.101316   66600 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:20.124859   66600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:20.268773   66600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:20.275277   66600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:20.275358   66600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:20.291972   66600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:20.291999   66600 start.go:495] detecting cgroup driver to use...
	I1028 18:29:20.292066   66600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:20.311389   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:20.325385   66600 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:20.325434   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:20.339246   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:20.353759   66600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:20.477639   66600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:20.622752   66600 docker.go:233] disabling docker service ...
	I1028 18:29:20.622825   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:20.637258   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:20.650210   66600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:20.801036   66600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:20.945078   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:20.959494   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:20.977797   66600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:20.977854   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.987991   66600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:20.988038   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.998188   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.008502   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.018540   66600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:21.028663   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.038758   66600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.056298   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
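The run of commands above is a series of in-place sed edits against /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, adjust conmon_cgroup, and open unprivileged ports via default_sysctls. A minimal Go equivalent of one of those substitutions (file path and replacement value are taken from the log; the helper itself is only a sketch, not the real crio.go):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}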
	I1028 18:29:21.067136   66600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:21.076859   66600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:21.076906   66600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:21.090468   66600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:21.099951   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:21.226675   66600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:21.321993   66600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:21.322074   66600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:21.327981   66600 start.go:563] Will wait 60s for crictl version
	I1028 18:29:21.328028   66600 ssh_runner.go:195] Run: which crictl
	I1028 18:29:21.331673   66600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:21.369066   66600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:21.369168   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.396873   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.426233   66600 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:21.427570   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:21.430207   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430560   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:21.430582   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430732   66600 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:21.435293   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:21.447885   66600 kubeadm.go:883] updating cluster {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:21.447989   66600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:21.448067   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:21.488401   66600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:21.488488   66600 ssh_runner.go:195] Run: which lz4
	I1028 18:29:21.492578   66600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:21.496531   66600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:21.496560   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:22.824198   66600 crio.go:462] duration metric: took 1.331643546s to copy over tarball
	I1028 18:29:22.824276   66600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:18.902233   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.902721   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.904121   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.354850   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.355961   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:24.854445   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:21.447903   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.948305   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.448529   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.947708   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.447881   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.947572   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.448433   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.948299   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.447748   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.947863   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
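The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are simply a poll: the runner keeps checking for the kube-apiserver process until it appears. A trivial Go loop in the same spirit (the interval and timeout here are my own choices, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same check as the log: newest exact-match kube-apiserver process.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("kube-apiserver is running, pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the kube-apiserver process")
}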
	I1028 18:29:24.906928   66600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082617931s)
	I1028 18:29:24.906959   66600 crio.go:469] duration metric: took 2.082732511s to extract the tarball
	I1028 18:29:24.906968   66600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:24.944094   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:24.991024   66600 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:24.991048   66600 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:24.991057   66600 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.31.2 crio true true} ...
	I1028 18:29:24.991175   66600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-021370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:24.991262   66600 ssh_runner.go:195] Run: crio config
	I1028 18:29:25.034609   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:25.034629   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:25.034639   66600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:25.034657   66600 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-021370 NodeName:embed-certs-021370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:25.034803   66600 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-021370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:25.034858   66600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:25.044587   66600 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:25.044661   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:25.054150   66600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 18:29:25.070100   66600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:25.085866   66600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
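The kubeadm config printed above is rendered in memory from the cluster settings and then copied to /var/tmp/minikube/kubeadm.yaml.new. As a small illustration of that kind of rendering (the struct and template here are mine, cover only the InitConfiguration stanza, and are not minikube's actual types), Go's text/template is sufficient:

package main

import (
	"log"
	"os"
	"text/template"
)

// nodeConfig holds just the fields used by this sketch; the names are hypothetical.
type nodeConfig struct {
	Name string
	IP   string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.IP}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	cfg := nodeConfig{Name: "embed-certs-021370", IP: "192.168.50.62"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		log.Fatal(err)
	}
}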
	I1028 18:29:25.101932   66600 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:25.105817   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:25.117399   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:25.235698   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:25.251517   66600 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370 for IP: 192.168.50.62
	I1028 18:29:25.251536   66600 certs.go:194] generating shared ca certs ...
	I1028 18:29:25.251549   66600 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:25.251701   66600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:25.251758   66600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:25.251771   66600 certs.go:256] generating profile certs ...
	I1028 18:29:25.251871   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/client.key
	I1028 18:29:25.251951   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key.1a2ee1e7
	I1028 18:29:25.252010   66600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key
	I1028 18:29:25.252184   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:25.252213   66600 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:25.252222   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:25.252246   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:25.252271   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:25.252291   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:25.252328   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:25.252968   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:25.280370   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:25.323757   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:25.356813   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:25.395729   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 18:29:25.428768   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:25.459929   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:25.485206   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:29:25.514312   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:25.537007   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:25.559926   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:25.582419   66600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:25.599284   66600 ssh_runner.go:195] Run: openssl version
	I1028 18:29:25.605132   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:25.615576   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619856   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619911   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.625516   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:25.636185   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:25.646664   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650958   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650998   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.657176   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:25.668490   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:25.679608   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.683993   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.684041   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.689729   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
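The openssl/ln sequence above is how the extra CA certificates get installed: compute the subject hash with `openssl x509 -hash -noout` and symlink <hash>.0 to the certificate under /etc/ssl/certs. The same two steps in Go, shown only as a sketch (paths reused from the log; on a real host the symlink step needs root):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of `ln -fs <cert> /etc/ssl/certs/<hash>.0`.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}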
	I1028 18:29:25.700817   66600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:25.705214   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:25.711351   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:25.717172   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:25.722879   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:25.728415   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:25.733859   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
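Each `openssl x509 ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours; certs that would expire get regenerated. The equivalent check in Go with crypto/x509, as a standalone sketch (the file path is taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: does the cert expire within 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, would regenerate")
	} else {
		fmt.Println("certificate is still valid beyond 24h")
	}
}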
	I1028 18:29:25.739422   66600 kubeadm.go:392] StartCluster: {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:25.739492   66600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:25.739534   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.779869   66600 cri.go:89] found id: ""
	I1028 18:29:25.779926   66600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:25.790753   66600 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:25.790771   66600 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:25.790811   66600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:25.800588   66600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:25.801624   66600 kubeconfig.go:125] found "embed-certs-021370" server: "https://192.168.50.62:8443"
	I1028 18:29:25.803466   66600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:25.813212   66600 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I1028 18:29:25.813240   66600 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:25.813254   66600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:25.813312   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.848911   66600 cri.go:89] found id: ""
	I1028 18:29:25.848976   66600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:25.866165   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:25.876454   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:25.876485   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:25.876539   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:29:25.886746   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:25.886802   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:25.897486   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:29:25.907828   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:25.907881   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:25.917520   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.926896   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:25.926950   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.937184   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:29:25.946539   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:25.946585   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:25.956520   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:25.968541   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:26.077716   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.298743   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.220990469s)
	I1028 18:29:27.298777   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.517286   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.582890   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.648091   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:27.648159   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.402969   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:27.405049   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.356621   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.356642   67489 pod_ready.go:82] duration metric: took 12.508989427s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.356653   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361609   67489 pod_ready.go:93] pod "kube-proxy-86rll" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.361627   67489 pod_ready.go:82] duration metric: took 4.968039ms for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361635   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365430   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.365449   67489 pod_ready.go:82] duration metric: took 3.807327ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365460   67489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:28.373442   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
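The pod_ready.go lines interleaved through this log keep re-reading a pod and checking its Ready condition until it flips to True or the 4m timeout expires (metrics-server stays NotReady here, which is what ultimately fails the test). A reduced client-go sketch of such a readiness wait (kubeconfig path, pod name and intervals are placeholders; this is not minikube's helper):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-dz4nl", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the pod to become Ready")
}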
	I1028 18:29:26.448386   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.948082   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.447496   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.948285   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.947683   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.447813   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.947810   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.448413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.947477   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.148668   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.648320   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.148392   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.648218   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.682858   66600 api_server.go:72] duration metric: took 2.034774456s to wait for apiserver process to appear ...
	I1028 18:29:29.682888   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:29.682915   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:29.683457   66600 api_server.go:269] stopped: https://192.168.50.62:8443/healthz: Get "https://192.168.50.62:8443/healthz": dial tcp 192.168.50.62:8443: connect: connection refused
	I1028 18:29:30.182997   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.878280   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.878304   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:32.878318   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.942789   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.942828   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:29.903158   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:32.404024   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.183344   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.187337   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.187362   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:33.683288   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.687653   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.687680   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:34.183190   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:34.187671   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:29:34.195909   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:34.195938   66600 api_server.go:131] duration metric: took 4.51303648s to wait for apiserver health ...
	I1028 18:29:34.195950   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:34.195959   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:34.197469   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:30.872450   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.372710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:31.448099   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.948269   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.447660   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.947559   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.447716   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.948569   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.447555   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.947612   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.448411   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.947786   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.198803   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:34.221645   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:34.250694   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:34.261167   66600 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:34.261211   66600 system_pods.go:61] "coredns-7c65d6cfc9-bdtd8" [e1fff57c-ba57-4592-9049-7cc80a6f67a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:34.261229   66600 system_pods.go:61] "etcd-embed-certs-021370" [0c805e30-b6d8-416c-97af-c33b142b46e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:34.261240   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [244e08f7-7e8c-4547-b145-9816374fe582] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:34.261251   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [c08dc68e-d441-4d96-8377-957c381c4ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:34.261265   66600 system_pods.go:61] "kube-proxy-7g7lr" [828a4297-7703-46a7-bffe-c8daf83ef4bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:29:34.261277   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [2bc3fea6-0f01-43e9-b69e-deb26980e658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:34.261286   66600 system_pods.go:61] "metrics-server-6867b74b74-gg8bl" [599d8cf3-717d-46b2-a5ba-43e00f46829b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:34.261296   66600 system_pods.go:61] "storage-provisioner" [ad047e20-2de9-447c-83bc-8b835292a25f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:29:34.261307   66600 system_pods.go:74] duration metric: took 10.589505ms to wait for pod list to return data ...
	I1028 18:29:34.261319   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:34.265041   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:34.265066   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:34.265079   66600 node_conditions.go:105] duration metric: took 3.75485ms to run NodePressure ...
	I1028 18:29:34.265098   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:34.567509   66600 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571573   66600 kubeadm.go:739] kubelet initialised
	I1028 18:29:34.571592   66600 kubeadm.go:740] duration metric: took 4.056877ms waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571599   66600 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:34.576872   66600 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:36.586357   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:34.901383   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.902526   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:35.871154   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:37.873138   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.447566   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.947886   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.448276   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.948547   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.447546   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.947974   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.448334   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.948183   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.448396   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.947620   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.083269   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.083414   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:41.083443   66600 pod_ready.go:82] duration metric: took 6.506548177s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:41.083453   66600 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:39.401480   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.402426   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:40.370529   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:42.371580   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:44.372259   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.448306   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.947486   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.448219   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.948295   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.447765   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.947468   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.448454   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.947488   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.447568   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.948070   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.089927   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.589484   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.594775   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:43.403246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.403595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.902160   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.872441   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.371650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.448123   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.948178   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.447989   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.947888   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.448230   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.947692   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.448090   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.947996   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.447949   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.947977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.089584   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.089607   66600 pod_ready.go:82] duration metric: took 7.006147079s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.089619   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093940   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.093959   66600 pod_ready.go:82] duration metric: took 4.332474ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093969   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098279   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.098295   66600 pod_ready.go:82] duration metric: took 4.319206ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098304   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102326   66600 pod_ready.go:93] pod "kube-proxy-7g7lr" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.102341   66600 pod_ready.go:82] duration metric: took 4.03162ms for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102349   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106249   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.106265   66600 pod_ready.go:82] duration metric: took 3.910208ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106279   66600 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:50.112678   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:52.113794   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.902296   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.902424   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.371741   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:53.371833   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.448130   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:51.948450   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:51.948545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:51.987428   67149 cri.go:89] found id: ""
	I1028 18:29:51.987459   67149 logs.go:282] 0 containers: []
	W1028 18:29:51.987470   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:51.987478   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:51.987534   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:52.021429   67149 cri.go:89] found id: ""
	I1028 18:29:52.021452   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.021460   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:52.021466   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:52.021509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:52.055338   67149 cri.go:89] found id: ""
	I1028 18:29:52.055362   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.055373   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:52.055380   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:52.055432   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:52.088673   67149 cri.go:89] found id: ""
	I1028 18:29:52.088697   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.088705   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:52.088711   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:52.088766   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:52.129833   67149 cri.go:89] found id: ""
	I1028 18:29:52.129854   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.129862   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:52.129867   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:52.129918   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:52.162994   67149 cri.go:89] found id: ""
	I1028 18:29:52.163029   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.163040   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:52.163047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:52.163105   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:52.196819   67149 cri.go:89] found id: ""
	I1028 18:29:52.196840   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.196848   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:52.196853   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:52.196906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:52.232924   67149 cri.go:89] found id: ""
	I1028 18:29:52.232955   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.232965   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:52.232977   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:52.232992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:52.283317   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:52.283353   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:52.296648   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:52.296673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:52.423396   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:52.423418   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:52.423429   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:52.497671   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:52.497704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:55.037920   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:55.052539   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:55.052602   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:55.089302   67149 cri.go:89] found id: ""
	I1028 18:29:55.089332   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.089343   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:55.089351   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:55.089404   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:55.127317   67149 cri.go:89] found id: ""
	I1028 18:29:55.127345   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.127352   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:55.127358   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:55.127413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:55.161689   67149 cri.go:89] found id: ""
	I1028 18:29:55.161714   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.161721   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:55.161727   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:55.161772   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:55.196494   67149 cri.go:89] found id: ""
	I1028 18:29:55.196521   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.196534   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:55.196542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:55.196596   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:55.234980   67149 cri.go:89] found id: ""
	I1028 18:29:55.235008   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.235020   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:55.235028   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:55.235086   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:55.274750   67149 cri.go:89] found id: ""
	I1028 18:29:55.274775   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.274783   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:55.274789   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:55.274842   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:55.309839   67149 cri.go:89] found id: ""
	I1028 18:29:55.309865   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.309874   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:55.309881   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:55.309943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:55.358765   67149 cri.go:89] found id: ""
	I1028 18:29:55.358793   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.358805   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:55.358816   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:55.358830   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:55.422821   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:55.422869   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:55.439458   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:55.439482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:55.507743   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:55.507764   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:55.507775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:55.582679   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:55.582710   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:54.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.612967   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:54.402722   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.902816   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:55.372539   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:57.871444   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:58.124907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:58.139125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:58.139181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:58.178829   67149 cri.go:89] found id: ""
	I1028 18:29:58.178853   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.178864   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:58.178871   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:58.178933   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:58.212290   67149 cri.go:89] found id: ""
	I1028 18:29:58.212320   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.212336   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:58.212344   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:58.212402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:58.246108   67149 cri.go:89] found id: ""
	I1028 18:29:58.246135   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.246145   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:58.246152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:58.246212   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:58.280625   67149 cri.go:89] found id: ""
	I1028 18:29:58.280651   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.280662   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:58.280670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:58.280727   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:58.318755   67149 cri.go:89] found id: ""
	I1028 18:29:58.318783   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.318793   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:58.318801   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:58.318853   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:58.356452   67149 cri.go:89] found id: ""
	I1028 18:29:58.356487   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.356499   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:58.356506   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:58.356564   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:58.389906   67149 cri.go:89] found id: ""
	I1028 18:29:58.389928   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.389936   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:58.389943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:58.390001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:58.425883   67149 cri.go:89] found id: ""
	I1028 18:29:58.425911   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.425920   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:58.425929   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:58.425943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:58.484392   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:58.484433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:58.498133   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:58.498159   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:58.572358   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:58.572382   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:58.572397   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:58.654963   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:58.654997   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:58.613408   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.614235   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:59.402355   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.403000   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.370479   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:02.370951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:04.372159   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.196593   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:01.209622   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:01.209693   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:01.243682   67149 cri.go:89] found id: ""
	I1028 18:30:01.243708   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.243718   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:01.243726   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:01.243786   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:01.277617   67149 cri.go:89] found id: ""
	I1028 18:30:01.277646   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.277654   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:01.277660   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:01.277710   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:01.314028   67149 cri.go:89] found id: ""
	I1028 18:30:01.314055   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.314067   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:01.314081   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:01.314152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:01.350324   67149 cri.go:89] found id: ""
	I1028 18:30:01.350348   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.350356   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:01.350362   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:01.350415   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:01.385802   67149 cri.go:89] found id: ""
	I1028 18:30:01.385826   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.385834   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:01.385840   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:01.385883   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:01.421507   67149 cri.go:89] found id: ""
	I1028 18:30:01.421534   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.421545   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:01.421553   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:01.421611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:01.457285   67149 cri.go:89] found id: ""
	I1028 18:30:01.457314   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.457326   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:01.457333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:01.457380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:01.490962   67149 cri.go:89] found id: ""
	I1028 18:30:01.490984   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.490992   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:01.491000   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:01.491012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:01.559906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:01.559937   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:01.559962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:01.639455   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:01.639485   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:01.681968   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:01.681994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:01.736639   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:01.736672   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.251876   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:04.265639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:04.265711   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:04.300133   67149 cri.go:89] found id: ""
	I1028 18:30:04.300159   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.300167   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:04.300173   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:04.300228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:04.335723   67149 cri.go:89] found id: ""
	I1028 18:30:04.335749   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.335760   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:04.335767   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:04.335825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:04.373009   67149 cri.go:89] found id: ""
	I1028 18:30:04.373030   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.373040   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:04.373048   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:04.373113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:04.405969   67149 cri.go:89] found id: ""
	I1028 18:30:04.405993   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.406003   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:04.406011   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:04.406066   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:04.441067   67149 cri.go:89] found id: ""
	I1028 18:30:04.441095   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.441106   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:04.441112   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:04.441176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:04.475231   67149 cri.go:89] found id: ""
	I1028 18:30:04.475260   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.475270   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:04.475277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:04.475342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:04.512970   67149 cri.go:89] found id: ""
	I1028 18:30:04.512998   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.513009   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:04.513017   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:04.513078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:04.547857   67149 cri.go:89] found id: ""
	I1028 18:30:04.547880   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.547890   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:04.547901   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:04.547913   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:04.598870   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:04.598900   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.612678   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:04.612705   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:04.686945   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:04.686967   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:04.686979   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:04.764943   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:04.764992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:03.113309   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.113449   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.613568   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:03.902735   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.903116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:06.872012   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:09.371576   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.310905   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:07.323880   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:07.323946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:07.363597   67149 cri.go:89] found id: ""
	I1028 18:30:07.363626   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.363637   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:07.363645   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:07.363706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:07.401051   67149 cri.go:89] found id: ""
	I1028 18:30:07.401073   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.401082   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:07.401089   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:07.401147   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:07.439710   67149 cri.go:89] found id: ""
	I1028 18:30:07.439735   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.439743   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:07.439748   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:07.439796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:07.476627   67149 cri.go:89] found id: ""
	I1028 18:30:07.476653   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.476663   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:07.476670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:07.476747   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:07.508770   67149 cri.go:89] found id: ""
	I1028 18:30:07.508796   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.508807   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:07.508814   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:07.508874   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:07.543467   67149 cri.go:89] found id: ""
	I1028 18:30:07.543496   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.543506   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:07.543514   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:07.543575   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:07.577181   67149 cri.go:89] found id: ""
	I1028 18:30:07.577204   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.577212   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:07.577217   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:07.577266   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:07.611862   67149 cri.go:89] found id: ""
	I1028 18:30:07.611886   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.611896   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:07.611906   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:07.611924   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:07.699794   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:07.699833   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.747920   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:07.747948   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:07.797402   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:07.797434   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:07.811752   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:07.811778   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:07.881604   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.382191   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:10.394572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:10.394624   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:10.428941   67149 cri.go:89] found id: ""
	I1028 18:30:10.428973   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.428984   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:10.429004   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:10.429071   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:10.462526   67149 cri.go:89] found id: ""
	I1028 18:30:10.462558   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.462569   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:10.462578   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:10.462641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:10.498472   67149 cri.go:89] found id: ""
	I1028 18:30:10.498495   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.498503   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:10.498509   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:10.498557   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:10.535400   67149 cri.go:89] found id: ""
	I1028 18:30:10.535422   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.535430   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:10.535436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:10.535483   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:10.568961   67149 cri.go:89] found id: ""
	I1028 18:30:10.568981   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.568988   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:10.568994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:10.569041   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:10.601273   67149 cri.go:89] found id: ""
	I1028 18:30:10.601306   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.601318   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:10.601325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:10.601383   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:10.638093   67149 cri.go:89] found id: ""
	I1028 18:30:10.638124   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.638135   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:10.638141   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:10.638203   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:10.674624   67149 cri.go:89] found id: ""
	I1028 18:30:10.674654   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.674665   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:10.674675   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:10.674688   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:10.714568   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:10.714602   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:10.764732   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:10.764765   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:10.778111   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:10.778139   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:10.854488   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.854516   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:10.854531   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:10.113469   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.614268   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:08.401958   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:10.402159   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.402379   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:11.872789   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.372947   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:13.438803   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:13.452322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:13.452397   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:13.487337   67149 cri.go:89] found id: ""
	I1028 18:30:13.487360   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.487369   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:13.487381   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:13.487488   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:13.521992   67149 cri.go:89] found id: ""
	I1028 18:30:13.522024   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.522034   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:13.522041   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:13.522099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:13.555315   67149 cri.go:89] found id: ""
	I1028 18:30:13.555347   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.555363   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:13.555371   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:13.555431   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:13.589401   67149 cri.go:89] found id: ""
	I1028 18:30:13.589425   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.589436   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:13.589445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:13.589493   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:13.629340   67149 cri.go:89] found id: ""
	I1028 18:30:13.629370   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.629385   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:13.629393   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:13.629454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:13.667307   67149 cri.go:89] found id: ""
	I1028 18:30:13.667337   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.667348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:13.667355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:13.667418   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:13.701457   67149 cri.go:89] found id: ""
	I1028 18:30:13.701513   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.701526   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:13.701536   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:13.701594   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:13.737989   67149 cri.go:89] found id: ""
	I1028 18:30:13.738023   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.738033   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:13.738043   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:13.738056   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:13.791743   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:13.791777   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:13.805501   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:13.805529   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:13.882239   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:13.882262   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:13.882276   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.963480   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:13.963516   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:15.112587   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:17.113242   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.901879   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.902869   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.871650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:18.872448   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.502799   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:16.516397   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:16.516456   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:16.551670   67149 cri.go:89] found id: ""
	I1028 18:30:16.551701   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.551712   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:16.551719   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:16.551771   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:16.584390   67149 cri.go:89] found id: ""
	I1028 18:30:16.584417   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.584428   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:16.584435   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:16.584510   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:16.620868   67149 cri.go:89] found id: ""
	I1028 18:30:16.620892   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.620899   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:16.620904   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:16.620949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:16.654189   67149 cri.go:89] found id: ""
	I1028 18:30:16.654216   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.654225   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:16.654231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:16.654284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:16.694526   67149 cri.go:89] found id: ""
	I1028 18:30:16.694557   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.694568   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:16.694575   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:16.694640   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:16.728857   67149 cri.go:89] found id: ""
	I1028 18:30:16.728884   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.728892   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:16.728898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:16.728948   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:16.763198   67149 cri.go:89] found id: ""
	I1028 18:30:16.763220   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.763227   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:16.763232   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:16.763282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:16.800120   67149 cri.go:89] found id: ""
	I1028 18:30:16.800142   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.800149   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:16.800157   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:16.800167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:16.852710   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:16.852736   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:16.867365   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:16.867395   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:16.945605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:16.945627   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:16.945643   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:17.022838   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:17.022871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.563585   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:19.577612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:19.577683   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:19.615797   67149 cri.go:89] found id: ""
	I1028 18:30:19.615820   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.615829   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:19.615836   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:19.615882   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:19.654780   67149 cri.go:89] found id: ""
	I1028 18:30:19.654802   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.654810   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:19.654816   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:19.654873   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:19.693502   67149 cri.go:89] found id: ""
	I1028 18:30:19.693532   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.693542   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:19.693550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:19.693611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:19.731869   67149 cri.go:89] found id: ""
	I1028 18:30:19.731902   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.731910   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:19.731916   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:19.731974   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:19.765046   67149 cri.go:89] found id: ""
	I1028 18:30:19.765081   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.765092   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:19.765099   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:19.765158   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:19.798082   67149 cri.go:89] found id: ""
	I1028 18:30:19.798105   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.798113   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:19.798119   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:19.798172   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:19.832562   67149 cri.go:89] found id: ""
	I1028 18:30:19.832590   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.832601   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:19.832608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:19.832676   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:19.867213   67149 cri.go:89] found id: ""
	I1028 18:30:19.867240   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.867251   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:19.867260   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:19.867277   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:19.942276   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:19.942304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.977642   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:19.977671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:20.027077   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:20.027109   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:20.040159   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:20.040181   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:20.113350   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:19.113850   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.613505   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:19.402671   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.902317   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.372438   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.872137   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:22.614379   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:22.628550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:22.628607   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:22.662647   67149 cri.go:89] found id: ""
	I1028 18:30:22.662670   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.662677   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:22.662683   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:22.662732   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:22.696697   67149 cri.go:89] found id: ""
	I1028 18:30:22.696736   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.696747   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:22.696753   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:22.696815   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:22.730011   67149 cri.go:89] found id: ""
	I1028 18:30:22.730039   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.730049   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:22.730056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:22.730114   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:22.766604   67149 cri.go:89] found id: ""
	I1028 18:30:22.766629   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.766639   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:22.766647   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:22.766703   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:22.800581   67149 cri.go:89] found id: ""
	I1028 18:30:22.800608   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.800617   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:22.800625   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:22.800692   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:22.832742   67149 cri.go:89] found id: ""
	I1028 18:30:22.832767   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.832775   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:22.832780   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:22.832823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:22.865850   67149 cri.go:89] found id: ""
	I1028 18:30:22.865876   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.865885   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:22.865892   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:22.865949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:22.904410   67149 cri.go:89] found id: ""
	I1028 18:30:22.904433   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.904443   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:22.904454   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:22.904482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:22.959275   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:22.959310   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:22.972630   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:22.972652   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:23.043851   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:23.043873   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:23.043886   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:23.121657   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:23.121686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:25.662109   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:25.676366   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:25.676443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:25.715192   67149 cri.go:89] found id: ""
	I1028 18:30:25.715216   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.715224   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:25.715230   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:25.715283   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:25.754736   67149 cri.go:89] found id: ""
	I1028 18:30:25.754765   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.754773   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:25.754779   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:25.754823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:25.794179   67149 cri.go:89] found id: ""
	I1028 18:30:25.794207   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.794216   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:25.794224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:25.794278   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:25.833206   67149 cri.go:89] found id: ""
	I1028 18:30:25.833238   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.833246   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:25.833252   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:25.833298   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:25.871628   67149 cri.go:89] found id: ""
	I1028 18:30:25.871659   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.871669   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:25.871677   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:25.871735   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:25.910900   67149 cri.go:89] found id: ""
	I1028 18:30:25.910924   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.910934   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:25.910942   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:25.911001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:25.943972   67149 cri.go:89] found id: ""
	I1028 18:30:25.943992   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.943999   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:25.944004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:25.944059   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:25.982521   67149 cri.go:89] found id: ""
	I1028 18:30:25.982544   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.982551   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:25.982559   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:25.982569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:26.033003   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:26.033031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:26.046480   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:26.046503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 18:30:24.112244   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.113815   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.902652   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.402135   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:25.873075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.372129   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	W1028 18:30:26.117194   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:26.117213   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:26.117230   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:26.195399   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:26.195430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:28.737237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:28.751846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:28.751910   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:28.794259   67149 cri.go:89] found id: ""
	I1028 18:30:28.794290   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.794301   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:28.794308   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:28.794374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:28.827573   67149 cri.go:89] found id: ""
	I1028 18:30:28.827603   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.827611   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:28.827616   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:28.827671   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:28.860676   67149 cri.go:89] found id: ""
	I1028 18:30:28.860702   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.860713   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:28.860721   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:28.860780   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:28.897302   67149 cri.go:89] found id: ""
	I1028 18:30:28.897327   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.897343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:28.897351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:28.897410   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:28.933425   67149 cri.go:89] found id: ""
	I1028 18:30:28.933454   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.933464   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:28.933471   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:28.933535   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:28.966004   67149 cri.go:89] found id: ""
	I1028 18:30:28.966032   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.966043   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:28.966051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:28.966107   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:29.002788   67149 cri.go:89] found id: ""
	I1028 18:30:29.002818   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.002829   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:29.002835   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:29.002894   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:29.033351   67149 cri.go:89] found id: ""
	I1028 18:30:29.033379   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.033389   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:29.033400   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:29.033420   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:29.107997   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:29.108025   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:29.144727   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:29.144753   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:29.206487   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:29.206521   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:29.219722   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:29.219744   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:29.288254   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:28.612485   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.113113   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.902960   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.871338   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.372081   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.789035   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:31.802587   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:31.802650   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:31.838372   67149 cri.go:89] found id: ""
	I1028 18:30:31.838401   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.838410   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:31.838416   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:31.838469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:31.877794   67149 cri.go:89] found id: ""
	I1028 18:30:31.877822   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.877833   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:31.877840   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:31.877896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:31.917442   67149 cri.go:89] found id: ""
	I1028 18:30:31.917472   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.917483   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:31.917490   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:31.917549   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:31.951900   67149 cri.go:89] found id: ""
	I1028 18:30:31.951931   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.951943   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:31.951951   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:31.952008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:31.988011   67149 cri.go:89] found id: ""
	I1028 18:30:31.988040   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.988051   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:31.988058   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:31.988116   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:32.021042   67149 cri.go:89] found id: ""
	I1028 18:30:32.021063   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.021071   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:32.021077   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:32.021124   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:32.053748   67149 cri.go:89] found id: ""
	I1028 18:30:32.053770   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.053778   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:32.053783   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:32.053837   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:32.089725   67149 cri.go:89] found id: ""
	I1028 18:30:32.089756   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.089766   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:32.089777   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:32.089790   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:32.140000   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:32.140031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:32.154023   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:32.154046   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:32.231222   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:32.231242   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:32.231255   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:32.311354   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:32.311388   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:34.852507   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:34.867133   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:34.867198   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:34.901201   67149 cri.go:89] found id: ""
	I1028 18:30:34.901228   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.901238   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:34.901245   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:34.901300   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:34.962788   67149 cri.go:89] found id: ""
	I1028 18:30:34.962814   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.962824   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:34.962835   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:34.962896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:34.996879   67149 cri.go:89] found id: ""
	I1028 18:30:34.996906   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.996917   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:34.996926   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:34.996986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:35.033516   67149 cri.go:89] found id: ""
	I1028 18:30:35.033541   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.033553   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:35.033560   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:35.033622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:35.066903   67149 cri.go:89] found id: ""
	I1028 18:30:35.066933   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.066945   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:35.066953   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:35.067010   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:35.099675   67149 cri.go:89] found id: ""
	I1028 18:30:35.099697   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.099704   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:35.099710   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:35.099755   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:35.133595   67149 cri.go:89] found id: ""
	I1028 18:30:35.133623   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.133633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:35.133641   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:35.133699   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:35.172236   67149 cri.go:89] found id: ""
	I1028 18:30:35.172262   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.172272   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:35.172282   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:35.172296   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:35.224952   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:35.224981   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:35.238554   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:35.238578   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:35.318991   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:35.319024   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:35.319040   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:35.399763   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:35.399799   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:33.612446   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.613847   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.402375   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.402653   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.902346   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:38.372413   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.947847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:37.963147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:37.963210   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.001768   67149 cri.go:89] found id: ""
	I1028 18:30:38.001792   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.001802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:38.001809   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:38.001868   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:38.042877   67149 cri.go:89] found id: ""
	I1028 18:30:38.042905   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.042916   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:38.042924   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:38.042986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:38.078116   67149 cri.go:89] found id: ""
	I1028 18:30:38.078143   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.078154   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:38.078162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:38.078226   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:38.111082   67149 cri.go:89] found id: ""
	I1028 18:30:38.111108   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.111119   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:38.111127   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:38.111187   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:38.144863   67149 cri.go:89] found id: ""
	I1028 18:30:38.144889   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.144898   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:38.144906   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:38.144962   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:38.178671   67149 cri.go:89] found id: ""
	I1028 18:30:38.178701   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.178712   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:38.178719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:38.178774   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:38.218441   67149 cri.go:89] found id: ""
	I1028 18:30:38.218464   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.218472   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:38.218477   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:38.218528   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:38.252697   67149 cri.go:89] found id: ""
	I1028 18:30:38.252719   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.252727   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:38.252736   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:38.252745   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:38.304813   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:38.304853   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:38.318437   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:38.318462   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:38.389959   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:38.389987   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:38.390002   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:38.471462   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:38.471495   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:41.013647   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:41.027167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:41.027233   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.113426   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:39.903261   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.402381   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.871193   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.873502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:41.062559   67149 cri.go:89] found id: ""
	I1028 18:30:41.062590   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.062601   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:41.062609   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:41.062667   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:41.097732   67149 cri.go:89] found id: ""
	I1028 18:30:41.097758   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.097767   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:41.097773   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:41.097819   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:41.133067   67149 cri.go:89] found id: ""
	I1028 18:30:41.133089   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.133097   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:41.133102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:41.133150   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:41.168640   67149 cri.go:89] found id: ""
	I1028 18:30:41.168674   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.168684   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:41.168691   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:41.168754   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:41.206429   67149 cri.go:89] found id: ""
	I1028 18:30:41.206453   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.206463   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:41.206470   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:41.206527   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:41.248326   67149 cri.go:89] found id: ""
	I1028 18:30:41.248350   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.248360   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:41.248369   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:41.248429   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:41.283703   67149 cri.go:89] found id: ""
	I1028 18:30:41.283734   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.283746   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:41.283753   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:41.283810   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:41.327759   67149 cri.go:89] found id: ""
	I1028 18:30:41.327786   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.327796   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:41.327807   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:41.327820   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:41.388563   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:41.388593   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:41.406411   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:41.406435   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:41.490605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:41.490626   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:41.490637   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:41.569386   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:41.569433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.109394   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:44.123047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:44.123113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:44.156762   67149 cri.go:89] found id: ""
	I1028 18:30:44.156792   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.156802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:44.156810   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:44.156867   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:44.192244   67149 cri.go:89] found id: ""
	I1028 18:30:44.192271   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.192282   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:44.192289   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:44.192357   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:44.224059   67149 cri.go:89] found id: ""
	I1028 18:30:44.224094   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.224101   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:44.224115   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:44.224168   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:44.258750   67149 cri.go:89] found id: ""
	I1028 18:30:44.258779   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.258789   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:44.258797   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:44.258854   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:44.295600   67149 cri.go:89] found id: ""
	I1028 18:30:44.295624   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.295632   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:44.295638   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:44.295684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:44.327278   67149 cri.go:89] found id: ""
	I1028 18:30:44.327302   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.327309   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:44.327315   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:44.327370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:44.360734   67149 cri.go:89] found id: ""
	I1028 18:30:44.360760   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.360768   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:44.360774   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:44.360822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:44.398198   67149 cri.go:89] found id: ""
	I1028 18:30:44.398224   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.398234   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:44.398249   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:44.398261   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:44.476135   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:44.476167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.514073   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:44.514105   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:44.563001   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:44.563033   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:44.576882   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:44.576912   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:44.648532   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:43.112043   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.113135   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.113382   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:44.403147   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:46.902890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.370854   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.371758   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.373946   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.149133   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:47.165612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:47.165696   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:47.203960   67149 cri.go:89] found id: ""
	I1028 18:30:47.203987   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.203996   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:47.204002   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:47.204065   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:47.236731   67149 cri.go:89] found id: ""
	I1028 18:30:47.236757   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.236766   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:47.236774   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:47.236828   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:47.273779   67149 cri.go:89] found id: ""
	I1028 18:30:47.273808   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.273820   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:47.273826   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:47.273878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:47.309996   67149 cri.go:89] found id: ""
	I1028 18:30:47.310020   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.310028   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:47.310034   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:47.310108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:47.352904   67149 cri.go:89] found id: ""
	I1028 18:30:47.352925   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.352934   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:47.352939   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:47.352990   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:47.389641   67149 cri.go:89] found id: ""
	I1028 18:30:47.389660   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.389667   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:47.389672   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:47.389718   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:47.422591   67149 cri.go:89] found id: ""
	I1028 18:30:47.422622   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.422632   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:47.422639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:47.422694   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:47.454849   67149 cri.go:89] found id: ""
	I1028 18:30:47.454876   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.454886   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:47.454895   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:47.454916   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:47.506176   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:47.506203   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:47.519084   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:47.519108   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:47.585660   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.585681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:47.585696   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:47.664904   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:47.664939   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:50.203775   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:50.216923   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:50.216992   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:50.252506   67149 cri.go:89] found id: ""
	I1028 18:30:50.252531   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.252541   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:50.252548   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:50.252608   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:50.288641   67149 cri.go:89] found id: ""
	I1028 18:30:50.288669   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.288678   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:50.288684   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:50.288739   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:50.322130   67149 cri.go:89] found id: ""
	I1028 18:30:50.322163   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.322174   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:50.322182   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:50.322240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:50.359508   67149 cri.go:89] found id: ""
	I1028 18:30:50.359536   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.359546   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:50.359554   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:50.359617   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:50.393571   67149 cri.go:89] found id: ""
	I1028 18:30:50.393607   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.393618   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:50.393626   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:50.393685   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:50.428683   67149 cri.go:89] found id: ""
	I1028 18:30:50.428705   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.428713   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:50.428719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:50.428767   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:50.464086   67149 cri.go:89] found id: ""
	I1028 18:30:50.464111   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.464119   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:50.464125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:50.464183   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:50.496695   67149 cri.go:89] found id: ""
	I1028 18:30:50.496726   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.496736   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:50.496745   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:50.496755   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:50.545495   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:50.545526   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:50.558819   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:50.558852   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:50.636344   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:50.636369   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:50.636384   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:50.720270   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:50.720304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:49.612927   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.613353   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.402779   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.901517   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.873490   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:54.372373   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.261194   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:53.274451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:53.274507   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:53.306258   67149 cri.go:89] found id: ""
	I1028 18:30:53.306286   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.306295   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:53.306301   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:53.306362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:53.340222   67149 cri.go:89] found id: ""
	I1028 18:30:53.340244   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.340253   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:53.340258   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:53.340322   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:53.377726   67149 cri.go:89] found id: ""
	I1028 18:30:53.377750   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.377760   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:53.377767   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:53.377820   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:53.414228   67149 cri.go:89] found id: ""
	I1028 18:30:53.414252   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.414262   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:53.414275   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:53.414332   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:53.449152   67149 cri.go:89] found id: ""
	I1028 18:30:53.449179   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.449186   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:53.449192   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:53.449237   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:53.485678   67149 cri.go:89] found id: ""
	I1028 18:30:53.485705   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.485716   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:53.485723   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:53.485784   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:53.520764   67149 cri.go:89] found id: ""
	I1028 18:30:53.520791   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.520802   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:53.520810   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:53.520870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:53.561153   67149 cri.go:89] found id: ""
	I1028 18:30:53.561176   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.561184   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:53.561192   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:53.561202   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:53.642192   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:53.642242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.686527   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:53.686567   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:53.740815   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:53.740849   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:53.754577   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:53.754604   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:53.823717   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:54.112985   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.612820   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.903128   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:55.903482   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.372798   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.871814   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.324847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:56.338572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:56.338628   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:56.375482   67149 cri.go:89] found id: ""
	I1028 18:30:56.375506   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.375517   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:56.375524   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:56.375580   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:56.407894   67149 cri.go:89] found id: ""
	I1028 18:30:56.407921   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.407931   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:56.407938   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:56.407993   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:56.447006   67149 cri.go:89] found id: ""
	I1028 18:30:56.447037   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.447048   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:56.447055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:56.447112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:56.483850   67149 cri.go:89] found id: ""
	I1028 18:30:56.483880   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.483890   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:56.483898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:56.483958   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:56.520008   67149 cri.go:89] found id: ""
	I1028 18:30:56.520038   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.520045   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:56.520051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:56.520099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:56.552567   67149 cri.go:89] found id: ""
	I1028 18:30:56.552592   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.552600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:56.552608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:56.552658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:56.591277   67149 cri.go:89] found id: ""
	I1028 18:30:56.591297   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.591305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:56.591311   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:56.591362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:56.632164   67149 cri.go:89] found id: ""
	I1028 18:30:56.632186   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.632194   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:56.632202   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:56.632219   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:56.683590   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:56.683623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:56.698509   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:56.698539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:56.777141   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.777171   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:56.777188   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:56.851801   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:56.851842   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.394266   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:59.408460   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:59.408545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:59.444066   67149 cri.go:89] found id: ""
	I1028 18:30:59.444092   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.444104   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:59.444112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:59.444165   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:59.479531   67149 cri.go:89] found id: ""
	I1028 18:30:59.479557   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.479568   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:59.479576   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:59.479622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:59.519467   67149 cri.go:89] found id: ""
	I1028 18:30:59.519489   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.519496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:59.519502   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:59.519546   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:59.551108   67149 cri.go:89] found id: ""
	I1028 18:30:59.551131   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.551140   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:59.551146   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:59.551197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:59.585875   67149 cri.go:89] found id: ""
	I1028 18:30:59.585899   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.585907   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:59.585912   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:59.585968   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:59.620571   67149 cri.go:89] found id: ""
	I1028 18:30:59.620595   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.620602   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:59.620608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:59.620655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:59.653927   67149 cri.go:89] found id: ""
	I1028 18:30:59.653954   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.653965   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:59.653972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:59.654039   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:59.689138   67149 cri.go:89] found id: ""
	I1028 18:30:59.689160   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.689168   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:59.689175   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:59.689185   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:59.768231   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:59.768270   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.811980   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:59.812007   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:59.864509   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:59.864543   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:59.879329   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:59.879354   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:59.950134   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:59.112280   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:01.113341   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.402845   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.902628   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.904642   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.872873   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:03.371672   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.450237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:02.464689   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:02.464765   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:02.500938   67149 cri.go:89] found id: ""
	I1028 18:31:02.500964   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.500975   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:02.500982   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:02.501043   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:02.534580   67149 cri.go:89] found id: ""
	I1028 18:31:02.534608   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.534620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:02.534628   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:02.534684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:02.570203   67149 cri.go:89] found id: ""
	I1028 18:31:02.570224   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.570231   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:02.570237   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:02.570284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:02.606037   67149 cri.go:89] found id: ""
	I1028 18:31:02.606064   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.606072   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:02.606082   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:02.606135   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:02.640622   67149 cri.go:89] found id: ""
	I1028 18:31:02.640646   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.640656   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:02.640663   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:02.640723   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:02.676406   67149 cri.go:89] found id: ""
	I1028 18:31:02.676434   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.676444   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:02.676451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:02.676520   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:02.710284   67149 cri.go:89] found id: ""
	I1028 18:31:02.710308   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.710316   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:02.710322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:02.710376   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:02.750853   67149 cri.go:89] found id: ""
	I1028 18:31:02.750899   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.750910   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:02.750918   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:02.750929   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:02.825886   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.825913   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:02.825927   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:02.904828   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:02.904857   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:02.941886   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:02.941922   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:02.991603   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:02.991632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.505655   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:05.520582   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:05.520638   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:05.558724   67149 cri.go:89] found id: ""
	I1028 18:31:05.558753   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.558763   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:05.558770   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:05.558816   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:05.597864   67149 cri.go:89] found id: ""
	I1028 18:31:05.597885   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.597893   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:05.597898   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:05.597956   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:05.643571   67149 cri.go:89] found id: ""
	I1028 18:31:05.643602   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.643613   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:05.643620   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:05.643679   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:05.682010   67149 cri.go:89] found id: ""
	I1028 18:31:05.682039   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.682048   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:05.682053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:05.682106   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:05.716043   67149 cri.go:89] found id: ""
	I1028 18:31:05.716067   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.716080   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:05.716086   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:05.716134   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:05.750962   67149 cri.go:89] found id: ""
	I1028 18:31:05.750995   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.751010   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:05.751016   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:05.751078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:05.785059   67149 cri.go:89] found id: ""
	I1028 18:31:05.785111   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.785124   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:05.785132   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:05.785193   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:05.833525   67149 cri.go:89] found id: ""
	I1028 18:31:05.833550   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.833559   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:05.833566   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:05.833579   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:05.887766   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:05.887796   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.902575   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:05.902606   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:05.975082   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:05.975108   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:05.975122   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:03.613265   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.114362   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.402167   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:07.402252   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.873147   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:08.370748   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.050369   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:06.050396   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.593506   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:08.606188   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:08.606251   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:08.645186   67149 cri.go:89] found id: ""
	I1028 18:31:08.645217   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.645227   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:08.645235   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:08.645294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:08.680728   67149 cri.go:89] found id: ""
	I1028 18:31:08.680759   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.680771   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:08.680778   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:08.680833   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:08.714733   67149 cri.go:89] found id: ""
	I1028 18:31:08.714760   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.714772   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:08.714779   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:08.714844   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:08.750293   67149 cri.go:89] found id: ""
	I1028 18:31:08.750323   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.750333   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:08.750339   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:08.750390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:08.784521   67149 cri.go:89] found id: ""
	I1028 18:31:08.784548   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.784559   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:08.784566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:08.784629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:08.818808   67149 cri.go:89] found id: ""
	I1028 18:31:08.818838   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.818849   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:08.818857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:08.818920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:08.855575   67149 cri.go:89] found id: ""
	I1028 18:31:08.855608   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.855619   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:08.855633   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:08.855690   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:08.892996   67149 cri.go:89] found id: ""
	I1028 18:31:08.893024   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.893035   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:08.893045   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:08.893064   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.937056   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:08.937084   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:08.989013   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:08.989048   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:09.002048   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:09.002077   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:09.075247   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:09.075277   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:09.075290   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:08.612396   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.612689   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:09.402595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.903403   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.371335   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:12.371435   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.371502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.654701   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:11.668066   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:11.668146   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:11.701666   67149 cri.go:89] found id: ""
	I1028 18:31:11.701693   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.701703   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:11.701710   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:11.701769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:11.738342   67149 cri.go:89] found id: ""
	I1028 18:31:11.738368   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.738376   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:11.738381   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:11.738428   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:11.772009   67149 cri.go:89] found id: ""
	I1028 18:31:11.772035   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.772045   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:11.772053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:11.772118   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:11.816210   67149 cri.go:89] found id: ""
	I1028 18:31:11.816237   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.816245   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:11.816251   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:11.816314   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:11.856675   67149 cri.go:89] found id: ""
	I1028 18:31:11.856704   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.856714   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:11.856722   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:11.856785   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:11.896566   67149 cri.go:89] found id: ""
	I1028 18:31:11.896592   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.896600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:11.896606   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:11.896665   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:11.932599   67149 cri.go:89] found id: ""
	I1028 18:31:11.932624   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.932633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:11.932640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:11.932704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:11.966952   67149 cri.go:89] found id: ""
	I1028 18:31:11.966982   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.967008   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:11.967019   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:11.967037   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:12.016465   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:12.016502   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:12.029314   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:12.029343   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:12.098906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:12.098936   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:12.098954   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:12.176440   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:12.176489   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:14.720173   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:14.733796   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:14.733848   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:14.774072   67149 cri.go:89] found id: ""
	I1028 18:31:14.774093   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.774100   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:14.774106   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:14.774152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:14.816116   67149 cri.go:89] found id: ""
	I1028 18:31:14.816145   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.816158   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:14.816166   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:14.816224   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:14.851167   67149 cri.go:89] found id: ""
	I1028 18:31:14.851189   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.851196   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:14.851202   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:14.851247   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:14.885887   67149 cri.go:89] found id: ""
	I1028 18:31:14.885918   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.885926   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:14.885931   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:14.885976   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:14.923787   67149 cri.go:89] found id: ""
	I1028 18:31:14.923815   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.923826   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:14.923833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:14.923892   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:14.960117   67149 cri.go:89] found id: ""
	I1028 18:31:14.960148   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.960160   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:14.960167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:14.960240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:14.998418   67149 cri.go:89] found id: ""
	I1028 18:31:14.998458   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.998470   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:14.998485   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:14.998545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:15.031985   67149 cri.go:89] found id: ""
	I1028 18:31:15.032005   67149 logs.go:282] 0 containers: []
	W1028 18:31:15.032014   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:15.032027   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:15.032038   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:15.045239   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:15.045264   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:15.118954   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:15.118978   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:15.118994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:15.200538   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:15.200569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:15.243581   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:15.243603   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:13.112232   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:15.113498   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.612946   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.401769   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.402729   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.871916   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.872378   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.794670   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:17.808325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:17.808380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:17.841888   67149 cri.go:89] found id: ""
	I1028 18:31:17.841911   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.841919   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:17.841925   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:17.841979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:17.881241   67149 cri.go:89] found id: ""
	I1028 18:31:17.881261   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.881269   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:17.881274   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:17.881331   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:17.922394   67149 cri.go:89] found id: ""
	I1028 18:31:17.922419   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.922428   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:17.922434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:17.922498   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:17.963519   67149 cri.go:89] found id: ""
	I1028 18:31:17.963546   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.963558   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:17.963566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:17.963641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:18.003181   67149 cri.go:89] found id: ""
	I1028 18:31:18.003202   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.003209   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:18.003214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:18.003261   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:18.040305   67149 cri.go:89] found id: ""
	I1028 18:31:18.040338   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.040348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:18.040356   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:18.040413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:18.077671   67149 cri.go:89] found id: ""
	I1028 18:31:18.077696   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.077708   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:18.077715   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:18.077777   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:18.116155   67149 cri.go:89] found id: ""
	I1028 18:31:18.116176   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.116182   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:18.116190   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:18.116201   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:18.168343   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:18.168370   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:18.181962   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:18.181988   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:18.260227   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:18.260251   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:18.260265   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:18.346588   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:18.346620   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:20.885832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:20.899053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:20.899121   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:20.954770   67149 cri.go:89] found id: ""
	I1028 18:31:20.954797   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.954806   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:20.954812   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:20.954870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:20.989809   67149 cri.go:89] found id: ""
	I1028 18:31:20.989834   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.989842   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:20.989848   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:20.989900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:21.027150   67149 cri.go:89] found id: ""
	I1028 18:31:21.027179   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.027191   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:21.027199   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:21.027259   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:20.113283   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:22.612710   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.902738   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.403607   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.371574   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.871000   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.061235   67149 cri.go:89] found id: ""
	I1028 18:31:21.061260   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.061270   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:21.061277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:21.061337   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:21.095451   67149 cri.go:89] found id: ""
	I1028 18:31:21.095473   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.095481   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:21.095487   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:21.095540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:21.135576   67149 cri.go:89] found id: ""
	I1028 18:31:21.135598   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.135606   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:21.135612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:21.135660   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:21.170816   67149 cri.go:89] found id: ""
	I1028 18:31:21.170845   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.170854   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:21.170860   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:21.170920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:21.204616   67149 cri.go:89] found id: ""
	I1028 18:31:21.204649   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.204660   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:21.204672   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:21.204686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:21.254523   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:21.254556   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:21.267981   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:21.268005   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:21.336786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:21.336813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:21.336828   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:21.420596   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:21.420625   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:23.962346   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:23.976628   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:23.976697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:24.016418   67149 cri.go:89] found id: ""
	I1028 18:31:24.016444   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.016453   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:24.016458   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:24.016533   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:24.051448   67149 cri.go:89] found id: ""
	I1028 18:31:24.051474   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.051483   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:24.051488   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:24.051554   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:24.090787   67149 cri.go:89] found id: ""
	I1028 18:31:24.090816   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.090829   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:24.090836   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:24.090900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:24.126315   67149 cri.go:89] found id: ""
	I1028 18:31:24.126342   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.126349   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:24.126355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:24.126402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:24.161340   67149 cri.go:89] found id: ""
	I1028 18:31:24.161367   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.161379   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:24.161387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:24.161445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:24.195991   67149 cri.go:89] found id: ""
	I1028 18:31:24.196017   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.196028   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:24.196036   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:24.196084   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:24.229789   67149 cri.go:89] found id: ""
	I1028 18:31:24.229822   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.229837   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:24.229845   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:24.229906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:24.264724   67149 cri.go:89] found id: ""
	I1028 18:31:24.264748   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.264757   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:24.264765   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:24.264775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:24.303551   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:24.303574   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:24.351693   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:24.351725   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:24.364537   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:24.364566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:24.436935   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:24.436955   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:24.436966   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:25.112870   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.612492   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.902008   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.902544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.902622   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.871089   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.871265   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:29.872201   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.014928   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:27.029540   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:27.029609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:27.064598   67149 cri.go:89] found id: ""
	I1028 18:31:27.064626   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.064636   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:27.064643   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:27.064704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:27.099432   67149 cri.go:89] found id: ""
	I1028 18:31:27.099455   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.099465   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:27.099472   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:27.099531   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:27.133961   67149 cri.go:89] found id: ""
	I1028 18:31:27.133996   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.134006   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:27.134012   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:27.134075   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:27.171976   67149 cri.go:89] found id: ""
	I1028 18:31:27.172003   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.172014   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:27.172022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:27.172092   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:27.205681   67149 cri.go:89] found id: ""
	I1028 18:31:27.205710   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.205721   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:27.205730   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:27.205793   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:27.244571   67149 cri.go:89] found id: ""
	I1028 18:31:27.244603   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.244612   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:27.244617   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:27.244661   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:27.281692   67149 cri.go:89] found id: ""
	I1028 18:31:27.281722   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.281738   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:27.281746   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:27.281800   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:27.335003   67149 cri.go:89] found id: ""
	I1028 18:31:27.335033   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.335041   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:27.335049   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:27.335066   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:27.353992   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:27.354017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:27.457103   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:27.457125   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:27.457136   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.537717   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:27.537746   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:27.579842   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:27.579870   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.133749   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:30.147518   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:30.147576   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:30.182683   67149 cri.go:89] found id: ""
	I1028 18:31:30.182711   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.182722   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:30.182729   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:30.182792   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:30.215088   67149 cri.go:89] found id: ""
	I1028 18:31:30.215109   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.215118   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:30.215124   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:30.215176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:30.250169   67149 cri.go:89] found id: ""
	I1028 18:31:30.250194   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.250202   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:30.250207   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:30.250284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:30.286028   67149 cri.go:89] found id: ""
	I1028 18:31:30.286055   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.286062   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:30.286069   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:30.286112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:30.320503   67149 cri.go:89] found id: ""
	I1028 18:31:30.320528   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.320539   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:30.320547   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:30.320604   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:30.352773   67149 cri.go:89] found id: ""
	I1028 18:31:30.352793   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.352800   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:30.352806   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:30.352859   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:30.385922   67149 cri.go:89] found id: ""
	I1028 18:31:30.385944   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.385951   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:30.385956   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:30.385999   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:30.421909   67149 cri.go:89] found id: ""
	I1028 18:31:30.421933   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.421945   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:30.421956   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:30.421971   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.470917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:30.470944   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:30.484033   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:30.484059   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:30.554810   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:30.554836   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:30.554850   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:30.634403   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:30.634432   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:30.113496   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.613397   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:30.402688   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.902277   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:31.872598   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:34.371198   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:33.182127   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:33.194994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:33.195063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:33.233076   67149 cri.go:89] found id: ""
	I1028 18:31:33.233098   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.233106   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:33.233112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:33.233160   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:33.266963   67149 cri.go:89] found id: ""
	I1028 18:31:33.266998   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.267021   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:33.267028   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:33.267083   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:33.305888   67149 cri.go:89] found id: ""
	I1028 18:31:33.305914   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.305922   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:33.305928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:33.305979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:33.339451   67149 cri.go:89] found id: ""
	I1028 18:31:33.339479   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.339489   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:33.339496   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:33.339555   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:33.375038   67149 cri.go:89] found id: ""
	I1028 18:31:33.375065   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.375073   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:33.375079   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:33.375125   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:33.409157   67149 cri.go:89] found id: ""
	I1028 18:31:33.409176   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.409183   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:33.409189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:33.409243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:33.449108   67149 cri.go:89] found id: ""
	I1028 18:31:33.449133   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.449149   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:33.449155   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:33.449227   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:33.491194   67149 cri.go:89] found id: ""
	I1028 18:31:33.491215   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.491224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:33.491232   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:33.491250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.530590   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:33.530618   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:33.581933   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:33.581962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:33.595387   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:33.595416   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:33.664855   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:33.664882   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:33.664899   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:35.113673   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.612606   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:35.401938   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.402270   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.372499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:38.372670   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.242724   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:36.256152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:36.256221   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:36.292452   67149 cri.go:89] found id: ""
	I1028 18:31:36.292494   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.292504   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:36.292511   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:36.292568   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:36.325210   67149 cri.go:89] found id: ""
	I1028 18:31:36.325231   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.325238   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:36.325244   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:36.325293   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:36.356738   67149 cri.go:89] found id: ""
	I1028 18:31:36.356757   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.356764   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:36.356769   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:36.356827   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:36.389678   67149 cri.go:89] found id: ""
	I1028 18:31:36.389704   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.389712   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:36.389717   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:36.389775   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:36.422956   67149 cri.go:89] found id: ""
	I1028 18:31:36.422989   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.422998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:36.423005   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:36.423061   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:36.456877   67149 cri.go:89] found id: ""
	I1028 18:31:36.456904   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.456914   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:36.456921   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:36.456983   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:36.489728   67149 cri.go:89] found id: ""
	I1028 18:31:36.489758   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.489766   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:36.489772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:36.489829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:36.524307   67149 cri.go:89] found id: ""
	I1028 18:31:36.524338   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.524350   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:36.524360   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:36.524372   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:36.574771   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:36.574800   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:36.587485   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:36.587506   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:36.655922   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:36.655949   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:36.655962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.738312   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:36.738352   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.279425   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:39.293108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:39.293167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:39.325542   67149 cri.go:89] found id: ""
	I1028 18:31:39.325573   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.325584   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:39.325592   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:39.325656   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:39.357581   67149 cri.go:89] found id: ""
	I1028 18:31:39.357609   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.357620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:39.357627   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:39.357681   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:39.394833   67149 cri.go:89] found id: ""
	I1028 18:31:39.394853   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.394860   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:39.394866   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:39.394916   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:39.430151   67149 cri.go:89] found id: ""
	I1028 18:31:39.430178   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.430188   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:39.430196   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:39.430254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:39.468060   67149 cri.go:89] found id: ""
	I1028 18:31:39.468089   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.468100   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:39.468108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:39.468181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:39.503702   67149 cri.go:89] found id: ""
	I1028 18:31:39.503734   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.503752   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:39.503761   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:39.503829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:39.536193   67149 cri.go:89] found id: ""
	I1028 18:31:39.536221   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.536233   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:39.536240   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:39.536305   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:39.570194   67149 cri.go:89] found id: ""
	I1028 18:31:39.570215   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.570224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:39.570232   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:39.570245   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:39.647179   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:39.647207   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:39.647220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:39.725980   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:39.726012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.765671   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:39.765704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:39.818257   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:39.818289   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:39.614055   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.112561   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:39.902061   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.402314   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:40.871483   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.872270   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.332335   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:42.344964   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:42.345031   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:42.380904   67149 cri.go:89] found id: ""
	I1028 18:31:42.380926   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.380933   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:42.380938   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:42.380982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:42.414361   67149 cri.go:89] found id: ""
	I1028 18:31:42.414385   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.414393   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:42.414399   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:42.414443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:42.447931   67149 cri.go:89] found id: ""
	I1028 18:31:42.447957   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.447968   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:42.447975   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:42.448024   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:42.483262   67149 cri.go:89] found id: ""
	I1028 18:31:42.483283   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.483296   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:42.483301   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:42.483365   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:42.516665   67149 cri.go:89] found id: ""
	I1028 18:31:42.516693   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.516702   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:42.516709   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:42.516776   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:42.550160   67149 cri.go:89] found id: ""
	I1028 18:31:42.550181   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.550188   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:42.550193   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:42.550238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:42.583509   67149 cri.go:89] found id: ""
	I1028 18:31:42.583535   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.583546   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:42.583552   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:42.583611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:42.619276   67149 cri.go:89] found id: ""
	I1028 18:31:42.619312   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.619320   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:42.619328   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:42.619338   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:42.692442   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:42.692487   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:42.731768   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:42.731798   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:42.783997   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:42.784043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.797809   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:42.797834   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:42.863351   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.363648   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:45.376277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:45.376341   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:45.415231   67149 cri.go:89] found id: ""
	I1028 18:31:45.415255   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.415265   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:45.415273   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:45.415330   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:45.451133   67149 cri.go:89] found id: ""
	I1028 18:31:45.451157   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.451164   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:45.451170   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:45.451228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:45.483526   67149 cri.go:89] found id: ""
	I1028 18:31:45.483552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.483562   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:45.483567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:45.483621   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:45.515799   67149 cri.go:89] found id: ""
	I1028 18:31:45.515828   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.515838   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:45.515846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:45.515906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:45.548043   67149 cri.go:89] found id: ""
	I1028 18:31:45.548071   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.548082   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:45.548090   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:45.548153   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:45.581525   67149 cri.go:89] found id: ""
	I1028 18:31:45.581552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.581563   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:45.581570   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:45.581629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:45.622258   67149 cri.go:89] found id: ""
	I1028 18:31:45.622282   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.622290   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:45.622296   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:45.622353   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:45.661255   67149 cri.go:89] found id: ""
	I1028 18:31:45.661275   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.661284   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:45.661292   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:45.661304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:45.675209   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:45.675242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:45.737546   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.737573   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:45.737592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:45.816012   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:45.816043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:45.854135   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:45.854167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:44.612155   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.612875   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:44.402557   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.902339   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:45.371918   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:47.872710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.875644   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:48.406233   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:48.418950   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:48.419001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:48.452933   67149 cri.go:89] found id: ""
	I1028 18:31:48.452952   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.452961   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:48.452975   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:48.453034   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:48.489604   67149 cri.go:89] found id: ""
	I1028 18:31:48.489630   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.489640   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:48.489648   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:48.489706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:48.525463   67149 cri.go:89] found id: ""
	I1028 18:31:48.525493   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.525504   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:48.525511   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:48.525566   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:48.559266   67149 cri.go:89] found id: ""
	I1028 18:31:48.559294   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.559302   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:48.559308   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:48.559363   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:48.592670   67149 cri.go:89] found id: ""
	I1028 18:31:48.592695   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.592706   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:48.592714   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:48.592769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:48.627175   67149 cri.go:89] found id: ""
	I1028 18:31:48.627196   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.627205   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:48.627213   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:48.627260   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:48.661864   67149 cri.go:89] found id: ""
	I1028 18:31:48.661887   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.661895   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:48.661901   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:48.661946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:48.696731   67149 cri.go:89] found id: ""
	I1028 18:31:48.696756   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.696765   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:48.696775   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:48.696788   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.745390   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:48.745417   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:48.759218   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:48.759241   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:48.830299   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:48.830331   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:48.830349   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:48.909934   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:48.909963   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:49.112884   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.613217   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.402707   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.903103   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:52.373283   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.872603   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.451597   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:51.464889   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:51.464943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:51.499962   67149 cri.go:89] found id: ""
	I1028 18:31:51.499990   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.500001   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:51.500010   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:51.500069   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:51.532341   67149 cri.go:89] found id: ""
	I1028 18:31:51.532370   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.532380   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:51.532388   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:51.532443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:51.565531   67149 cri.go:89] found id: ""
	I1028 18:31:51.565554   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.565561   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:51.565567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:51.565614   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:51.602859   67149 cri.go:89] found id: ""
	I1028 18:31:51.602882   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.602894   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:51.602899   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:51.602943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:51.639896   67149 cri.go:89] found id: ""
	I1028 18:31:51.639915   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.639922   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:51.639928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:51.639972   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:51.675728   67149 cri.go:89] found id: ""
	I1028 18:31:51.675755   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.675762   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:51.675768   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:51.675825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:51.710285   67149 cri.go:89] found id: ""
	I1028 18:31:51.710312   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.710320   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:51.710326   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:51.710374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:51.744527   67149 cri.go:89] found id: ""
	I1028 18:31:51.744551   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.744560   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:51.744570   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:51.744584   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.780580   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:51.780614   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:51.832979   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:51.833008   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:51.846389   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:51.846415   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:51.918177   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:51.918196   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:51.918210   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.493806   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:54.506468   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:54.506526   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:54.540500   67149 cri.go:89] found id: ""
	I1028 18:31:54.540527   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.540537   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:54.540544   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:54.540601   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:54.573399   67149 cri.go:89] found id: ""
	I1028 18:31:54.573428   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.573438   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:54.573448   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:54.573509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:54.606227   67149 cri.go:89] found id: ""
	I1028 18:31:54.606262   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.606272   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:54.606278   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:54.606338   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:54.641143   67149 cri.go:89] found id: ""
	I1028 18:31:54.641163   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.641172   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:54.641179   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:54.641238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:54.674269   67149 cri.go:89] found id: ""
	I1028 18:31:54.674292   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.674300   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:54.674306   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:54.674352   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:54.707160   67149 cri.go:89] found id: ""
	I1028 18:31:54.707183   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.707191   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:54.707197   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:54.707242   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:54.746522   67149 cri.go:89] found id: ""
	I1028 18:31:54.746544   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.746552   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:54.746558   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:54.746613   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:54.779315   67149 cri.go:89] found id: ""
	I1028 18:31:54.779341   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.779348   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:54.779356   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:54.779367   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:54.830987   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:54.831017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:54.844846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:54.844871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:54.913540   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:54.913558   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:54.913568   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.994220   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:54.994250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:54.112785   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.114029   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.401657   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.402726   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.371756   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:59.372308   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.532820   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:57.545394   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:57.545454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:57.582329   67149 cri.go:89] found id: ""
	I1028 18:31:57.582355   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.582365   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:57.582372   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:57.582438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:57.616082   67149 cri.go:89] found id: ""
	I1028 18:31:57.616107   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.616115   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:57.616123   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:57.616167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:57.650118   67149 cri.go:89] found id: ""
	I1028 18:31:57.650144   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.650153   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:57.650162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:57.650215   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:57.684801   67149 cri.go:89] found id: ""
	I1028 18:31:57.684823   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.684831   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:57.684839   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:57.684887   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:57.722396   67149 cri.go:89] found id: ""
	I1028 18:31:57.722423   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.722431   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:57.722437   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:57.722516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:57.759779   67149 cri.go:89] found id: ""
	I1028 18:31:57.759802   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.759809   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:57.759818   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:57.759861   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:57.793977   67149 cri.go:89] found id: ""
	I1028 18:31:57.794034   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.794045   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:57.794053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:57.794117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:57.831104   67149 cri.go:89] found id: ""
	I1028 18:31:57.831130   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.831140   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:57.831151   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:57.831164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:57.920155   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:57.920174   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:57.920184   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:57.999677   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:57.999709   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:58.036647   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:58.036673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:58.088299   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:58.088333   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.601832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:00.615434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:00.615491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:00.653344   67149 cri.go:89] found id: ""
	I1028 18:32:00.653372   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.653383   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:00.653390   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:00.653450   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:00.693086   67149 cri.go:89] found id: ""
	I1028 18:32:00.693111   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.693122   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:00.693130   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:00.693188   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:00.728129   67149 cri.go:89] found id: ""
	I1028 18:32:00.728157   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.728167   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:00.728181   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:00.728243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:00.760540   67149 cri.go:89] found id: ""
	I1028 18:32:00.760568   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.760579   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:00.760586   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:00.760654   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:00.796633   67149 cri.go:89] found id: ""
	I1028 18:32:00.796662   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.796672   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:00.796680   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:00.796740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:00.829924   67149 cri.go:89] found id: ""
	I1028 18:32:00.829954   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.829966   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:00.829974   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:00.830028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:00.861565   67149 cri.go:89] found id: ""
	I1028 18:32:00.861586   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.861593   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:00.861599   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:00.861655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:00.894129   67149 cri.go:89] found id: ""
	I1028 18:32:00.894154   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.894162   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:00.894169   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:00.894180   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.908303   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:00.908331   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:00.974521   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:00.974543   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:00.974557   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:58.612554   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.612655   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:58.901908   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.902851   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.872423   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.873235   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.048113   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:01.048140   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:01.086657   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:01.086731   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.639781   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:03.652239   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:03.652291   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:03.687098   67149 cri.go:89] found id: ""
	I1028 18:32:03.687120   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.687129   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:03.687135   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:03.687181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:03.722176   67149 cri.go:89] found id: ""
	I1028 18:32:03.722206   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.722217   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:03.722225   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:03.722282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:03.757489   67149 cri.go:89] found id: ""
	I1028 18:32:03.757512   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.757520   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:03.757526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:03.757571   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:03.795359   67149 cri.go:89] found id: ""
	I1028 18:32:03.795400   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.795411   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:03.795429   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:03.795489   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:03.830919   67149 cri.go:89] found id: ""
	I1028 18:32:03.830945   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.830953   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:03.830958   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:03.831008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:03.863396   67149 cri.go:89] found id: ""
	I1028 18:32:03.863425   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.863437   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:03.863445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:03.863516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:03.897085   67149 cri.go:89] found id: ""
	I1028 18:32:03.897112   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.897121   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:03.897128   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:03.897189   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:03.929439   67149 cri.go:89] found id: ""
	I1028 18:32:03.929467   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.929478   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:03.929487   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:03.929503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.982917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:03.982943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:03.996333   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:03.996355   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:04.062786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:04.062813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:04.062827   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:04.143988   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:04.144016   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:03.113499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.612544   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.620294   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.402246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.402730   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.904429   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.373120   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:08.871662   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.683977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:06.696605   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:06.696680   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:06.733031   67149 cri.go:89] found id: ""
	I1028 18:32:06.733060   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.733070   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:06.733078   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:06.733138   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:06.769196   67149 cri.go:89] found id: ""
	I1028 18:32:06.769218   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.769225   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:06.769231   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:06.769280   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:06.806938   67149 cri.go:89] found id: ""
	I1028 18:32:06.806959   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.806966   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:06.806972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:06.807017   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:06.839506   67149 cri.go:89] found id: ""
	I1028 18:32:06.839528   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.839537   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:06.839542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:06.839587   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:06.878275   67149 cri.go:89] found id: ""
	I1028 18:32:06.878300   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.878309   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:06.878317   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:06.878382   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:06.916336   67149 cri.go:89] found id: ""
	I1028 18:32:06.916366   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.916374   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:06.916381   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:06.916434   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:06.971413   67149 cri.go:89] found id: ""
	I1028 18:32:06.971435   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.971443   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:06.971449   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:06.971494   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:07.004432   67149 cri.go:89] found id: ""
	I1028 18:32:07.004464   67149 logs.go:282] 0 containers: []
	W1028 18:32:07.004485   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:07.004496   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:07.004509   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:07.081741   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:07.081780   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:07.122022   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:07.122053   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:07.169470   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:07.169496   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:07.183433   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:07.183459   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:07.251765   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:09.752773   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:09.766042   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:09.766119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:09.802881   67149 cri.go:89] found id: ""
	I1028 18:32:09.802911   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.802923   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:09.802930   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:09.802987   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:09.840269   67149 cri.go:89] found id: ""
	I1028 18:32:09.840292   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.840300   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:09.840305   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:09.840370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:09.874654   67149 cri.go:89] found id: ""
	I1028 18:32:09.874679   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.874689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:09.874696   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:09.874752   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:09.910328   67149 cri.go:89] found id: ""
	I1028 18:32:09.910350   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.910358   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:09.910365   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:09.910425   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:09.942717   67149 cri.go:89] found id: ""
	I1028 18:32:09.942744   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.942752   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:09.942757   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:09.942814   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:09.975644   67149 cri.go:89] found id: ""
	I1028 18:32:09.975674   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.975685   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:09.975692   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:09.975750   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:10.008257   67149 cri.go:89] found id: ""
	I1028 18:32:10.008294   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.008305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:10.008313   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:10.008373   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:10.041678   67149 cri.go:89] found id: ""
	I1028 18:32:10.041705   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.041716   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:10.041726   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:10.041739   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:10.090474   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:10.090503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:10.103846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:10.103874   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:10.172819   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:10.172847   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:10.172862   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:10.251927   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:10.251955   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:10.112553   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.113090   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:10.401890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.902888   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:11.371860   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:13.373112   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.795985   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:12.810859   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:12.810921   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:12.849897   67149 cri.go:89] found id: ""
	I1028 18:32:12.849925   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.849934   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:12.849940   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:12.850003   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:12.883007   67149 cri.go:89] found id: ""
	I1028 18:32:12.883034   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.883045   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:12.883052   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:12.883111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:12.917458   67149 cri.go:89] found id: ""
	I1028 18:32:12.917485   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.917496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:12.917503   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:12.917561   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:12.950531   67149 cri.go:89] found id: ""
	I1028 18:32:12.950558   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.950568   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:12.950576   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:12.950631   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:12.983902   67149 cri.go:89] found id: ""
	I1028 18:32:12.983929   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.983937   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:12.983943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:12.983986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:13.017486   67149 cri.go:89] found id: ""
	I1028 18:32:13.017513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.017521   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:13.017526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:13.017582   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:13.050553   67149 cri.go:89] found id: ""
	I1028 18:32:13.050582   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.050594   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:13.050601   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:13.050658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:13.083489   67149 cri.go:89] found id: ""
	I1028 18:32:13.083513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.083520   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:13.083528   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:13.083537   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:13.137451   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:13.137482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:13.153154   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:13.153179   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:13.221043   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:13.221066   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:13.221080   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:13.299930   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:13.299960   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:15.850484   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:15.862930   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:15.862982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:15.895625   67149 cri.go:89] found id: ""
	I1028 18:32:15.895643   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.895651   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:15.895657   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:15.895701   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:15.928073   67149 cri.go:89] found id: ""
	I1028 18:32:15.928103   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.928113   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:15.928120   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:15.928180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:15.962261   67149 cri.go:89] found id: ""
	I1028 18:32:15.962282   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.962290   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:15.962295   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:15.962342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:15.999177   67149 cri.go:89] found id: ""
	I1028 18:32:15.999206   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.999216   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:15.999224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:15.999282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:16.033098   67149 cri.go:89] found id: ""
	I1028 18:32:16.033126   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.033138   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:16.033145   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:16.033208   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:14.612739   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.112266   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.401576   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.401773   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:18.372059   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:16.067049   67149 cri.go:89] found id: ""
	I1028 18:32:16.067071   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.067083   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:16.067089   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:16.067145   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:16.106936   67149 cri.go:89] found id: ""
	I1028 18:32:16.106970   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.106981   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:16.106988   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:16.107044   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:16.141702   67149 cri.go:89] found id: ""
	I1028 18:32:16.141729   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.141741   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:16.141751   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:16.141762   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:16.178772   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:16.178803   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:16.230851   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:16.230878   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:16.244489   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:16.244514   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:16.319362   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:16.319389   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:16.319405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:18.899694   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:18.913287   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:18.913358   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:18.954136   67149 cri.go:89] found id: ""
	I1028 18:32:18.954158   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.954165   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:18.954170   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:18.954218   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:18.987427   67149 cri.go:89] found id: ""
	I1028 18:32:18.987449   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.987457   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:18.987462   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:18.987505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:19.022067   67149 cri.go:89] found id: ""
	I1028 18:32:19.022099   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.022110   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:19.022118   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:19.022167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:19.054533   67149 cri.go:89] found id: ""
	I1028 18:32:19.054560   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.054570   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:19.054578   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:19.054644   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:19.099324   67149 cri.go:89] found id: ""
	I1028 18:32:19.099356   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.099367   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:19.099375   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:19.099436   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:19.146437   67149 cri.go:89] found id: ""
	I1028 18:32:19.146463   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.146470   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:19.146478   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:19.146540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:19.192027   67149 cri.go:89] found id: ""
	I1028 18:32:19.192053   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.192070   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:19.192078   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:19.192140   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:19.228411   67149 cri.go:89] found id: ""
	I1028 18:32:19.228437   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.228447   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:19.228457   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:19.228480   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:19.313151   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:19.313183   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:19.352117   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:19.352142   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:19.402772   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:19.402805   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:19.416148   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:19.416167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:19.483098   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:19.112720   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.611924   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:19.403635   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.902116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:20.872280   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:22.872726   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.983420   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:21.997129   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:21.997180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:22.035600   67149 cri.go:89] found id: ""
	I1028 18:32:22.035622   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.035631   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:22.035637   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:22.035684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:22.073413   67149 cri.go:89] found id: ""
	I1028 18:32:22.073440   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.073450   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:22.073458   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:22.073505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:22.108637   67149 cri.go:89] found id: ""
	I1028 18:32:22.108663   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.108673   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:22.108682   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:22.108740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:22.145837   67149 cri.go:89] found id: ""
	I1028 18:32:22.145860   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.145867   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:22.145873   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:22.145928   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:22.183830   67149 cri.go:89] found id: ""
	I1028 18:32:22.183855   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.183864   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:22.183869   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:22.183917   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:22.221402   67149 cri.go:89] found id: ""
	I1028 18:32:22.221423   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.221430   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:22.221436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:22.221484   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:22.262193   67149 cri.go:89] found id: ""
	I1028 18:32:22.262220   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.262229   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:22.262234   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:22.262297   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:22.298774   67149 cri.go:89] found id: ""
	I1028 18:32:22.298797   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.298808   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:22.298819   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:22.298831   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:22.348677   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:22.348716   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:22.362199   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:22.362220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:22.429304   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:22.429327   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:22.429345   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:22.511591   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:22.511623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.049119   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:25.063910   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:25.063970   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:25.099795   67149 cri.go:89] found id: ""
	I1028 18:32:25.099822   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.099833   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:25.099840   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:25.099898   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:25.137957   67149 cri.go:89] found id: ""
	I1028 18:32:25.137985   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.137995   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:25.138002   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:25.138063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:25.174687   67149 cri.go:89] found id: ""
	I1028 18:32:25.174715   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.174726   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:25.174733   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:25.174795   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:25.207039   67149 cri.go:89] found id: ""
	I1028 18:32:25.207067   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.207077   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:25.207084   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:25.207130   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:25.239961   67149 cri.go:89] found id: ""
	I1028 18:32:25.239990   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.239998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:25.240004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:25.240055   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:25.273823   67149 cri.go:89] found id: ""
	I1028 18:32:25.273848   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.273858   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:25.273865   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:25.273925   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:25.310725   67149 cri.go:89] found id: ""
	I1028 18:32:25.310754   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.310765   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:25.310772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:25.310830   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:25.348724   67149 cri.go:89] found id: ""
	I1028 18:32:25.348749   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.348760   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:25.348770   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:25.348784   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:25.430213   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:25.430243   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.472233   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:25.472263   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:25.525648   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:25.525676   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:25.538697   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:25.538721   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:25.606779   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:23.612901   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.112494   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:23.902733   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.402271   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:25.372428   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:27.870461   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:29.871824   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.107877   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:28.122241   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:28.122296   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:28.157042   67149 cri.go:89] found id: ""
	I1028 18:32:28.157070   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.157082   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:28.157089   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:28.157142   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:28.190625   67149 cri.go:89] found id: ""
	I1028 18:32:28.190648   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.190658   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:28.190666   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:28.190724   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:28.224528   67149 cri.go:89] found id: ""
	I1028 18:32:28.224551   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.224559   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:28.224565   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:28.224609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:28.265073   67149 cri.go:89] found id: ""
	I1028 18:32:28.265100   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.265110   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:28.265116   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:28.265174   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:28.302598   67149 cri.go:89] found id: ""
	I1028 18:32:28.302623   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.302633   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:28.302640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:28.302697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:28.339757   67149 cri.go:89] found id: ""
	I1028 18:32:28.339781   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.339789   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:28.339794   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:28.339846   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:28.375185   67149 cri.go:89] found id: ""
	I1028 18:32:28.375213   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.375224   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:28.375231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:28.375294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:28.413292   67149 cri.go:89] found id: ""
	I1028 18:32:28.413316   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.413334   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:28.413344   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:28.413376   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:28.464069   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:28.464098   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:28.478275   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:28.478299   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:28.546483   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.546504   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:28.546515   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:28.623015   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:28.623041   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:28.613303   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.111518   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.403789   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:30.903113   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:32.371951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:34.372820   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.161570   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:31.175056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:31.175119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:31.210163   67149 cri.go:89] found id: ""
	I1028 18:32:31.210187   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.210199   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:31.210207   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:31.210264   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:31.244605   67149 cri.go:89] found id: ""
	I1028 18:32:31.244630   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.244637   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:31.244643   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:31.244688   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:31.280793   67149 cri.go:89] found id: ""
	I1028 18:32:31.280818   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.280827   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:31.280833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:31.280890   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:31.314616   67149 cri.go:89] found id: ""
	I1028 18:32:31.314641   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.314649   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:31.314654   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:31.314709   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:31.349386   67149 cri.go:89] found id: ""
	I1028 18:32:31.349410   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.349417   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:31.349423   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:31.349469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:31.382831   67149 cri.go:89] found id: ""
	I1028 18:32:31.382861   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.382871   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:31.382879   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:31.382924   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:31.417365   67149 cri.go:89] found id: ""
	I1028 18:32:31.417391   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.417400   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:31.417410   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:31.417469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:31.450631   67149 cri.go:89] found id: ""
	I1028 18:32:31.450660   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.450672   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:31.450683   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:31.450697   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.488932   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:31.488959   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:31.539335   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:31.539361   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:31.552304   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:31.552328   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:31.629291   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:31.629308   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:31.629323   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.207517   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:34.221231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:34.221310   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:34.255342   67149 cri.go:89] found id: ""
	I1028 18:32:34.255365   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.255373   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:34.255379   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:34.255438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:34.303802   67149 cri.go:89] found id: ""
	I1028 18:32:34.303827   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.303836   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:34.303843   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:34.303896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:34.339531   67149 cri.go:89] found id: ""
	I1028 18:32:34.339568   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.339579   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:34.339589   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:34.339653   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:34.374063   67149 cri.go:89] found id: ""
	I1028 18:32:34.374084   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.374094   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:34.374102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:34.374155   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:34.410880   67149 cri.go:89] found id: ""
	I1028 18:32:34.410909   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.410918   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:34.410924   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:34.410971   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:34.445372   67149 cri.go:89] found id: ""
	I1028 18:32:34.445397   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.445408   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:34.445416   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:34.445474   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:34.477820   67149 cri.go:89] found id: ""
	I1028 18:32:34.477844   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.477851   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:34.477857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:34.477909   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:34.517581   67149 cri.go:89] found id: ""
	I1028 18:32:34.517602   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.517609   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:34.517618   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:34.517632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:34.530407   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:34.530430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:34.599055   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:34.599083   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:34.599096   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.681579   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:34.681612   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:34.720523   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:34.720550   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:33.111858   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.112216   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.613521   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:33.401782   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.402544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.901848   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:36.871451   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.372642   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.272697   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:37.289091   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:37.289159   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:37.321600   67149 cri.go:89] found id: ""
	I1028 18:32:37.321628   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.321639   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:37.321647   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:37.321704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:37.353296   67149 cri.go:89] found id: ""
	I1028 18:32:37.353324   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.353337   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:37.353343   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:37.353400   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:37.386299   67149 cri.go:89] found id: ""
	I1028 18:32:37.386321   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.386328   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:37.386333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:37.386401   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:37.420992   67149 cri.go:89] found id: ""
	I1028 18:32:37.421026   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.421039   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:37.421047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:37.421117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:37.456174   67149 cri.go:89] found id: ""
	I1028 18:32:37.456206   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.456217   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:37.456224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:37.456284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:37.491796   67149 cri.go:89] found id: ""
	I1028 18:32:37.491819   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.491827   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:37.491833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:37.491878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:37.529002   67149 cri.go:89] found id: ""
	I1028 18:32:37.529028   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.529039   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:37.529047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:37.529111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:37.568967   67149 cri.go:89] found id: ""
	I1028 18:32:37.568993   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.569001   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:37.569010   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:37.569022   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:37.640041   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:37.640065   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:37.640076   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:37.725490   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:37.725524   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:37.771858   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:37.771879   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.821240   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:37.821271   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.334946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:40.349147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:40.349216   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:40.383931   67149 cri.go:89] found id: ""
	I1028 18:32:40.383956   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.383966   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:40.383973   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:40.384028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:40.419877   67149 cri.go:89] found id: ""
	I1028 18:32:40.419905   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.419915   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:40.419922   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:40.419978   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:40.453659   67149 cri.go:89] found id: ""
	I1028 18:32:40.453681   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.453689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:40.453695   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:40.453744   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:40.486299   67149 cri.go:89] found id: ""
	I1028 18:32:40.486326   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.486343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:40.486350   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:40.486407   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:40.518309   67149 cri.go:89] found id: ""
	I1028 18:32:40.518334   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.518344   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:40.518351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:40.518402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:40.549008   67149 cri.go:89] found id: ""
	I1028 18:32:40.549040   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.549049   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:40.549055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:40.549108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:40.586157   67149 cri.go:89] found id: ""
	I1028 18:32:40.586177   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.586184   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:40.586189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:40.586232   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:40.621107   67149 cri.go:89] found id: ""
	I1028 18:32:40.621133   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.621144   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:40.621153   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:40.621164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.633793   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:40.633816   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:40.700370   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:40.700393   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:40.700405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:40.780964   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:40.780993   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:40.819904   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:40.819928   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
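	(The stanza above is minikube's log-collection loop: with the apiserver unreachable, it probes CRI-O for each control-plane component and, finding none, falls back to host-level logs. A rough manual equivalent, assembled only from the crictl/journalctl/dmesg invocations already shown in these lines — the component list is taken from the log, not from minikube's source:)
	# probe CRI-O for each expected control-plane container (all states)
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# nothing found, so fall back to host-level logs
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400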
	I1028 18:32:40.112755   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:42.113116   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.903476   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.904639   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.872360   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.371399   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:43.371487   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:43.384387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:43.384445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:43.419889   67149 cri.go:89] found id: ""
	I1028 18:32:43.419922   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.419931   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:43.419937   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:43.419997   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:43.455177   67149 cri.go:89] found id: ""
	I1028 18:32:43.455209   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.455219   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:43.455227   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:43.455295   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:43.493070   67149 cri.go:89] found id: ""
	I1028 18:32:43.493094   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.493104   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:43.493111   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:43.493170   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:43.526164   67149 cri.go:89] found id: ""
	I1028 18:32:43.526191   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.526199   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:43.526205   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:43.526254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:43.559225   67149 cri.go:89] found id: ""
	I1028 18:32:43.559252   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.559263   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:43.559270   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:43.559323   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:43.597178   67149 cri.go:89] found id: ""
	I1028 18:32:43.597198   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.597206   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:43.597212   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:43.597276   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:43.633179   67149 cri.go:89] found id: ""
	I1028 18:32:43.633200   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.633209   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:43.633214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:43.633290   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:43.669567   67149 cri.go:89] found id: ""
	I1028 18:32:43.669596   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.669605   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:43.669615   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:43.669631   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:43.737618   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:43.737638   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:43.737650   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:43.821394   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:43.821425   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:43.859924   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:43.859950   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:43.913539   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:43.913566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:44.611539   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.613781   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.401399   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.401930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.371445   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.372075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.429021   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:46.443137   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:46.443197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:46.480363   67149 cri.go:89] found id: ""
	I1028 18:32:46.480385   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.480394   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:46.480400   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:46.480452   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:46.514702   67149 cri.go:89] found id: ""
	I1028 18:32:46.514731   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.514738   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:46.514744   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:46.514796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:46.546829   67149 cri.go:89] found id: ""
	I1028 18:32:46.546857   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.546868   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:46.546874   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:46.546920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:46.580372   67149 cri.go:89] found id: ""
	I1028 18:32:46.580398   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.580407   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:46.580415   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:46.580491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:46.615455   67149 cri.go:89] found id: ""
	I1028 18:32:46.615479   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.615489   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:46.615497   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:46.615556   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:46.649547   67149 cri.go:89] found id: ""
	I1028 18:32:46.649570   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.649577   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:46.649583   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:46.649641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:46.684744   67149 cri.go:89] found id: ""
	I1028 18:32:46.684768   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.684779   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:46.684787   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:46.684852   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:46.725530   67149 cri.go:89] found id: ""
	I1028 18:32:46.725558   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.725569   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:46.725578   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:46.725592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:46.794487   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:46.794506   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:46.794517   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:46.881407   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:46.881438   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:46.921649   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:46.921671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:46.972915   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:46.972947   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.486835   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:49.501445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:49.501509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:49.537356   67149 cri.go:89] found id: ""
	I1028 18:32:49.537377   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.537384   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:49.537389   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:49.537443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:49.568514   67149 cri.go:89] found id: ""
	I1028 18:32:49.568541   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.568549   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:49.568555   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:49.568610   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:49.602300   67149 cri.go:89] found id: ""
	I1028 18:32:49.602324   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.602333   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:49.602342   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:49.602390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:49.640326   67149 cri.go:89] found id: ""
	I1028 18:32:49.640356   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.640366   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:49.640376   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:49.640437   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:49.675145   67149 cri.go:89] found id: ""
	I1028 18:32:49.675175   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.675183   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:49.675189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:49.675235   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:49.711104   67149 cri.go:89] found id: ""
	I1028 18:32:49.711129   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.711139   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:49.711147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:49.711206   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:49.748316   67149 cri.go:89] found id: ""
	I1028 18:32:49.748366   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.748378   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:49.748385   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:49.748441   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:49.781620   67149 cri.go:89] found id: ""
	I1028 18:32:49.781646   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.781656   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:49.781665   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:49.781679   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.795119   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:49.795143   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:49.870438   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:49.870519   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:49.870539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:49.956845   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:49.956875   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:49.993067   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:49.993097   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:49.112102   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:51.612691   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.901950   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.902354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.903627   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.871412   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.871499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:54.874588   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
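	(The pod_ready lines record minikube polling the metrics-server pod's Ready condition against a 4m0s deadline. A hedged manual equivalent — pod name and namespace are taken from the log; the jsonpath query and kubectl wait are illustrations, not minikube's own code:)
	# prints the Ready condition status ("True"/"False") for the pod seen in the log
	kubectl -n kube-system get pod metrics-server-6867b74b74-dz4nl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or let kubectl block until Ready (or time out, as the test eventually does)
	kubectl -n kube-system wait pod/metrics-server-6867b74b74-dz4nl --for=condition=Ready --timeout=4m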
	I1028 18:32:52.543260   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:52.556524   67149 kubeadm.go:597] duration metric: took 4m2.404527005s to restartPrimaryControlPlane
	W1028 18:32:52.556602   67149 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:52.556639   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:32:53.011065   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:32:53.026226   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:32:53.035868   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:32:53.045257   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:32:53.045271   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:32:53.045302   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:32:53.054383   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:32:53.054430   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:32:53.063665   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:32:53.073006   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:32:53.073054   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:32:53.083156   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.092700   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:32:53.092742   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.102374   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:32:53.112072   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:32:53.112121   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:32:53.122102   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:32:53.347625   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
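	(The PID 67149 sequence above is minikube abandoning the restart of the existing control plane: kubeadm reset, a check of each kubeconfig file for the expected control-plane.minikube.internal:8443 endpoint, removal of any that do not match, then a fresh kubeadm init. A condensed sketch of that cleanup, built only from the commands already logged; the loop form is an editorial shorthand, not minikube's implementation:)
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/"$f" || sudo rm -f /etc/kubernetes/"$f"
	done
	# followed by the kubeadm init invocation logged at 18:32:53.122102 above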
	I1028 18:32:53.613118   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:56.111841   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:55.402354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.902406   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.371909   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:59.872630   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.112962   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:00.613499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.896006   66801 pod_ready.go:82] duration metric: took 4m0.00005957s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	E1028 18:32:58.896033   66801 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:32:58.896052   66801 pod_ready.go:39] duration metric: took 4m13.055181811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:32:58.896092   66801 kubeadm.go:597] duration metric: took 4m21.540757653s to restartPrimaryControlPlane
	W1028 18:32:58.896147   66801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:58.896173   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:02.372443   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:04.871981   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:03.113038   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:05.114488   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:07.612365   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:06.872705   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.371018   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.612856   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:12.114228   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:11.371831   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:13.372636   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:14.613213   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.113328   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:15.871907   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.872203   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:19.612892   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:21.613052   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:20.370964   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:22.371880   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:24.372718   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:25.039296   66801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.14309835s)
	I1028 18:33:25.039378   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:25.056172   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:25.066775   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:25.077717   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:25.077734   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:25.077770   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:33:25.086924   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:25.086968   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:25.096867   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:33:25.106162   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:25.106205   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:25.117015   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.126191   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:25.126245   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.135691   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:33:25.144827   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:25.144867   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:25.153834   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:25.201789   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:33:25.201866   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:33:25.306568   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:33:25.306717   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:33:25.306845   66801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:33:25.314339   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:33:25.316173   66801 out.go:235]   - Generating certificates and keys ...
	I1028 18:33:25.316271   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:33:25.316345   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:33:25.316463   66801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:33:25.316571   66801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:33:25.316688   66801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:33:25.316768   66801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:33:25.316857   66801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:33:25.316943   66801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:33:25.317047   66801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:33:25.317149   66801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:33:25.317209   66801 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:33:25.317299   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:33:25.643056   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:33:25.723345   66801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:33:25.831628   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:33:25.908255   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:33:26.215149   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:33:26.215654   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:33:26.218291   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:33:24.111834   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.113295   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.220065   66801 out.go:235]   - Booting up control plane ...
	I1028 18:33:26.220170   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:33:26.220251   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:33:26.220336   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:33:26.239633   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:33:26.245543   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:33:26.245612   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:33:26.378154   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:33:26.378332   66801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:33:26.879957   66801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.937575ms
	I1028 18:33:26.880090   66801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:33:26.365771   67489 pod_ready.go:82] duration metric: took 4m0.000286415s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:26.365796   67489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:26.365812   67489 pod_ready.go:39] duration metric: took 4m12.539631154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:26.365837   67489 kubeadm.go:597] duration metric: took 4m19.835720994s to restartPrimaryControlPlane
	W1028 18:33:26.365884   67489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:26.365910   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:31.882091   66801 kubeadm.go:310] [api-check] The API server is healthy after 5.002114527s
	I1028 18:33:31.897915   66801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:33:31.914311   66801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:33:31.943604   66801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:33:31.943859   66801 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-051152 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:33:31.954350   66801 kubeadm.go:310] [bootstrap-token] Using token: h7eyzq.87sgylc03ke6zhfy
	I1028 18:33:28.613480   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.113034   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.955444   66801 out.go:235]   - Configuring RBAC rules ...
	I1028 18:33:31.955591   66801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:33:31.960749   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:33:31.967695   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:33:31.970863   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:33:31.973924   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:33:31.979191   66801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:33:32.291512   66801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:33:32.714999   66801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:33:33.291889   66801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:33:33.293069   66801 kubeadm.go:310] 
	I1028 18:33:33.293167   66801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:33:33.293182   66801 kubeadm.go:310] 
	I1028 18:33:33.293255   66801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:33:33.293268   66801 kubeadm.go:310] 
	I1028 18:33:33.293307   66801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:33:33.293372   66801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:33:33.293435   66801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:33:33.293447   66801 kubeadm.go:310] 
	I1028 18:33:33.293518   66801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:33:33.293526   66801 kubeadm.go:310] 
	I1028 18:33:33.293595   66801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:33:33.293624   66801 kubeadm.go:310] 
	I1028 18:33:33.293712   66801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:33:33.293842   66801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:33:33.293946   66801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:33:33.293960   66801 kubeadm.go:310] 
	I1028 18:33:33.294117   66801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:33:33.294196   66801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:33:33.294203   66801 kubeadm.go:310] 
	I1028 18:33:33.294276   66801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294385   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:33:33.294414   66801 kubeadm.go:310] 	--control-plane 
	I1028 18:33:33.294427   66801 kubeadm.go:310] 
	I1028 18:33:33.294515   66801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:33:33.294525   66801 kubeadm.go:310] 
	I1028 18:33:33.294629   66801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294774   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:33:33.295715   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:33:33.295839   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:33:33.295852   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:33:33.297447   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:33:33.298607   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:33:33.311113   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:33:33.329576   66801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:33:33.329634   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:33.329680   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-051152 minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=no-preload-051152 minikube.k8s.io/primary=true
	I1028 18:33:33.355186   66801 ops.go:34] apiserver oom_adj: -16
	I1028 18:33:33.509281   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.009672   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.509515   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.010084   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.509359   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.009689   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.509671   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.009884   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.510004   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.615853   66801 kubeadm.go:1113] duration metric: took 4.286272328s to wait for elevateKubeSystemPrivileges
	I1028 18:33:37.615890   66801 kubeadm.go:394] duration metric: took 5m0.313982235s to StartCluster
	I1028 18:33:37.615913   66801 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.616000   66801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:33:37.618418   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.618741   66801 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:33:37.618857   66801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:33:37.618951   66801 addons.go:69] Setting storage-provisioner=true in profile "no-preload-051152"
	I1028 18:33:37.618963   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:33:37.618975   66801 addons.go:69] Setting default-storageclass=true in profile "no-preload-051152"
	I1028 18:33:37.619001   66801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-051152"
	I1028 18:33:37.618973   66801 addons.go:234] Setting addon storage-provisioner=true in "no-preload-051152"
	W1028 18:33:37.619019   66801 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:33:37.619012   66801 addons.go:69] Setting metrics-server=true in profile "no-preload-051152"
	I1028 18:33:37.619043   66801 addons.go:234] Setting addon metrics-server=true in "no-preload-051152"
	I1028 18:33:37.619047   66801 host.go:66] Checking if "no-preload-051152" exists ...
	W1028 18:33:37.619056   66801 addons.go:243] addon metrics-server should already be in state true
	I1028 18:33:37.619097   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.619417   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619446   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619472   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619488   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619487   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619521   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.620738   66801 out.go:177] * Verifying Kubernetes components...
	I1028 18:33:37.622165   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:33:37.636006   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1028 18:33:37.636285   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I1028 18:33:37.636536   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.636621   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.637055   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637082   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637344   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637368   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637419   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637634   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637811   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.638112   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.638157   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.638738   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1028 18:33:37.639176   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.639609   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.639632   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.639918   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.640333   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.640375   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.641571   66801 addons.go:234] Setting addon default-storageclass=true in "no-preload-051152"
	W1028 18:33:37.641592   66801 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:33:37.641620   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.641947   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.641981   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.657758   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I1028 18:33:37.657834   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I1028 18:33:37.657942   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I1028 18:33:37.658187   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658335   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658739   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658752   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658877   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658896   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658931   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.659309   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659358   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659409   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.659428   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.659552   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.659934   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.659964   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.660163   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.660406   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.661568   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.662429   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.663435   66801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:33:37.664414   66801 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:33:33.613699   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:36.111831   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:37.665306   66801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.665324   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:33:37.665343   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.666055   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:33:37.666073   66801 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:33:37.666092   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.668918   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669385   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669519   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.669543   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669754   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.669942   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.670093   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.670266   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.670513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.670556   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.670719   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.670851   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.671014   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.671115   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.677419   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I1028 18:33:37.677828   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.678184   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.678201   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.678476   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.678686   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.680177   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.680403   66801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.680420   66801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:33:37.680437   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.683981   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.684534   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.685007   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.685153   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.685307   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.832104   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:33:37.859406   66801 node_ready.go:35] waiting up to 6m0s for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873437   66801 node_ready.go:49] node "no-preload-051152" has status "Ready":"True"
	I1028 18:33:37.873460   66801 node_ready.go:38] duration metric: took 14.023686ms for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873470   66801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:37.888286   66801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:37.917341   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:33:37.917363   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:33:37.948690   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:33:37.948716   66801 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:33:37.967948   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.971737   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.998758   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:37.998782   66801 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:33:38.034907   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
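(The lines above show minikube's addon flow: each manifest is scp'd into /etc/kubernetes/addons/ on the VM and then applied with the bundled kubectl against the in-VM kubeconfig. A minimal sketch of that apply step, assuming a hypothetical helper and using only the paths that appear in the log, might look like:

package addons

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests mirrors the apply step in the log: invoke the bundled
// kubectl with the in-VM kubeconfig against manifests already copied into
// /etc/kubernetes/addons/. Paths come from the log; the helper itself is
// illustrative, not minikube's actual code.
func applyAddonManifests(manifests ...string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// sudo accepts leading VAR=value assignments, as the log's command line shows.
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}
)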
	I1028 18:33:38.924695   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924720   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.924762   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924828   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925048   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925079   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925093   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925105   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925128   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925131   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925142   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925153   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925154   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925164   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925372   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925397   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925382   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926852   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926857   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.926872   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.955462   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.955492   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.955858   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.955938   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.955953   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373144   66801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.338192413s)
	I1028 18:33:39.373209   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373224   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373512   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373529   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373537   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373544   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373761   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373775   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373785   66801 addons.go:475] Verifying addon metrics-server=true in "no-preload-051152"
	I1028 18:33:39.375584   66801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:33:38.113078   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:40.612141   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.612763   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:39.377031   66801 addons.go:510] duration metric: took 1.758176418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:33:39.906691   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.396083   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:44.894264   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:46.396937   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.397023   66801 pod_ready.go:82] duration metric: took 8.508709164s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.397048   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402560   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.402579   66801 pod_ready.go:82] duration metric: took 5.5155ms for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402588   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406630   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.406646   66801 pod_ready.go:82] duration metric: took 4.052513ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406654   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411238   66801 pod_ready.go:93] pod "kube-proxy-28qht" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.411253   66801 pod_ready.go:82] duration metric: took 4.592983ms for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411260   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414867   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.414880   66801 pod_ready.go:82] duration metric: took 3.615132ms for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414886   66801 pod_ready.go:39] duration metric: took 8.541406133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
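(The pod_ready.go lines above poll each system-critical pod until its Ready condition is True. A hedged client-go sketch of that wait, using a hypothetical helper rather than minikube's own implementation:

package kverify

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True or
// the timeout expires, roughly what the pod_ready.go log lines report.
func waitPodReady(kubeconfig, name string, timeout time.Duration) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return false, nil
}
)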
	I1028 18:33:46.414900   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:33:46.414943   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:33:46.430889   66801 api_server.go:72] duration metric: took 8.81211088s to wait for apiserver process to appear ...
	I1028 18:33:46.430907   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:33:46.430925   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:33:46.435248   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:33:46.435963   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:33:46.435978   66801 api_server.go:131] duration metric: took 5.065719ms to wait for apiserver health ...
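(api_server.go above probes https://192.168.61.78:8443/healthz and expects a 200 response with an "ok" body. A minimal sketch of that probe, assuming a self-signed apiserver certificate and therefore skipping verification; real callers would trust the cluster CA instead:

package kverify

import (
	"crypto/tls"
	"io"
	"net/http"
	"time"
)

// checkAPIServerHealthz issues the same healthz probe the log shows and
// reports whether the apiserver answered 200 "ok".
func checkAPIServerHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
)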
	I1028 18:33:46.435984   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:33:46.596186   66801 system_pods.go:59] 9 kube-system pods found
	I1028 18:33:46.596222   66801 system_pods.go:61] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.596230   66801 system_pods.go:61] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.596234   66801 system_pods.go:61] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.596238   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.596242   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.596246   66801 system_pods.go:61] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.596252   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.596301   66801 system_pods.go:61] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.596317   66801 system_pods.go:61] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.596324   66801 system_pods.go:74] duration metric: took 160.335823ms to wait for pod list to return data ...
	I1028 18:33:46.596341   66801 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:33:46.793115   66801 default_sa.go:45] found service account: "default"
	I1028 18:33:46.793147   66801 default_sa.go:55] duration metric: took 196.795286ms for default service account to be created ...
	I1028 18:33:46.793157   66801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:33:46.995868   66801 system_pods.go:86] 9 kube-system pods found
	I1028 18:33:46.995899   66801 system_pods.go:89] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.995905   66801 system_pods.go:89] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.995909   66801 system_pods.go:89] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.995912   66801 system_pods.go:89] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.995917   66801 system_pods.go:89] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.995920   66801 system_pods.go:89] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.995924   66801 system_pods.go:89] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.995929   66801 system_pods.go:89] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.995934   66801 system_pods.go:89] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.995941   66801 system_pods.go:126] duration metric: took 202.778451ms to wait for k8s-apps to be running ...
	I1028 18:33:46.995946   66801 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:33:46.995990   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:47.011260   66801 system_svc.go:56] duration metric: took 15.302599ms WaitForService to wait for kubelet
	I1028 18:33:47.011285   66801 kubeadm.go:582] duration metric: took 9.392510785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:33:47.011303   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:33:47.193217   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:33:47.193239   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:33:47.193250   66801 node_conditions.go:105] duration metric: took 181.942948ms to run NodePressure ...
	I1028 18:33:47.193261   66801 start.go:241] waiting for startup goroutines ...
	I1028 18:33:47.193267   66801 start.go:246] waiting for cluster config update ...
	I1028 18:33:47.193278   66801 start.go:255] writing updated cluster config ...
	I1028 18:33:47.193529   66801 ssh_runner.go:195] Run: rm -f paused
	I1028 18:33:47.240247   66801 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:33:47.242139   66801 out.go:177] * Done! kubectl is now configured to use "no-preload-051152" cluster and "default" namespace by default
	I1028 18:33:45.112037   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:47.112764   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:48.107354   66600 pod_ready.go:82] duration metric: took 4m0.001062902s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:48.107377   66600 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:48.107395   66600 pod_ready.go:39] duration metric: took 4m13.535788316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:48.107420   66600 kubeadm.go:597] duration metric: took 4m22.316644235s to restartPrimaryControlPlane
	W1028 18:33:48.107467   66600 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:48.107490   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:52.667497   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.301566887s)
	I1028 18:33:52.667559   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:52.683580   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:52.695334   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:52.705505   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:52.705524   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:52.705569   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:33:52.714922   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:52.714969   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:52.724156   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:33:52.733125   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:52.733161   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:52.742369   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.751021   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:52.751065   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.760543   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:33:52.770939   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:52.770985   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
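(kubeadm.go:163 above checks whether each leftover /etc/kubernetes/*.conf still references the expected control-plane endpoint (here control-plane.minikube.internal:8444) and deletes it when the grep fails. A pure-Go sketch of the same idea; the hypothetical helper runs in-process, whereas minikube performs the checks over SSH with grep and rm:

package kubeadm

import (
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes kubeadm-generated kubeconfig files that do
// not reference the expected API endpoint, mirroring the grep-then-rm
// sequence in the log.
func pruneStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(f); err != nil {
				return err
			}
		}
	}
	return nil
}
)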
	I1028 18:33:52.781890   67489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:52.961562   67489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:01.798408   67489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:01.798470   67489 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:01.798580   67489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:01.798724   67489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:01.798811   67489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:01.798882   67489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:01.800228   67489 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:01.800320   67489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:01.800392   67489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:01.800486   67489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:01.800580   67489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:01.800641   67489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:01.800694   67489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:01.800764   67489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:01.800842   67489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:01.800955   67489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:01.801019   67489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:01.801053   67489 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:01.801102   67489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:01.801145   67489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:01.801196   67489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:01.801252   67489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:01.801316   67489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:01.801409   67489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:01.801513   67489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:01.801605   67489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:01.802967   67489 out.go:235]   - Booting up control plane ...
	I1028 18:34:01.803061   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:01.803169   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:01.803254   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:01.803376   67489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:01.803488   67489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:01.803558   67489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:01.803685   67489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:01.803800   67489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:01.803869   67489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.148945ms
	I1028 18:34:01.803933   67489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:01.803986   67489 kubeadm.go:310] [api-check] The API server is healthy after 5.003798359s
	I1028 18:34:01.804081   67489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:01.804187   67489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:01.804240   67489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:01.804438   67489 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-692033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:01.804533   67489 kubeadm.go:310] [bootstrap-token] Using token: wy8zqj.38m6tcr6hp7sgzod
	I1028 18:34:01.805760   67489 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:01.805856   67489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:01.805949   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:01.806108   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:01.806233   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:01.806378   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:01.806464   67489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:01.806579   67489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:01.806633   67489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:01.806673   67489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:01.806679   67489 kubeadm.go:310] 
	I1028 18:34:01.806735   67489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:01.806746   67489 kubeadm.go:310] 
	I1028 18:34:01.806836   67489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:01.806844   67489 kubeadm.go:310] 
	I1028 18:34:01.806880   67489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:01.806957   67489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:01.807001   67489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:01.807007   67489 kubeadm.go:310] 
	I1028 18:34:01.807060   67489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:01.807071   67489 kubeadm.go:310] 
	I1028 18:34:01.807112   67489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:01.807118   67489 kubeadm.go:310] 
	I1028 18:34:01.807171   67489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:01.807246   67489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:01.807307   67489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:01.807313   67489 kubeadm.go:310] 
	I1028 18:34:01.807387   67489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:01.807454   67489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:01.807465   67489 kubeadm.go:310] 
	I1028 18:34:01.807538   67489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807634   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:01.807655   67489 kubeadm.go:310] 	--control-plane 
	I1028 18:34:01.807661   67489 kubeadm.go:310] 
	I1028 18:34:01.807730   67489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:01.807739   67489 kubeadm.go:310] 
	I1028 18:34:01.807810   67489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807913   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:01.807923   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:34:01.807929   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:01.809168   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:01.810293   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:01.822030   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
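(cni.go above selects the bridge CNI for the kvm2 driver + crio runtime combination and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents; the sketch below writes a generic bridge + host-local configuration as a stand-in, not the exact conflist minikube generates:

package cni

import "os"

// bridgeConflist is a minimal bridge CNI config with host-local IPAM; the
// subnet and plugin options are placeholders for illustration only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}`

// writeBridgeConflist drops the config where crio's CNI loader will find it,
// matching the path used in the log above.
func writeBridgeConflist() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
)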
	I1028 18:34:01.842831   67489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:01.842908   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:01.842963   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-692033 minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=default-k8s-diff-port-692033 minikube.k8s.io/primary=true
	I1028 18:34:01.875265   67489 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:02.050422   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:02.550824   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.050477   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.551245   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.051177   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.550572   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.051071   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.550926   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.638447   67489 kubeadm.go:1113] duration metric: took 3.795598924s to wait for elevateKubeSystemPrivileges
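(The repeated "kubectl get sa default" runs above are how minikube waits for the default ServiceAccount to exist after kubeadm brings the cluster up, before granting kube-system:default the cluster-admin role via the minikube-rbac clusterrolebinding. A sketch of that polling loop, as a hypothetical helper reusing the binary and kubeconfig paths from the log:

package kubeadm

import (
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount repeats the "kubectl get sa default" probe
// seen in the log until it succeeds or the timeout expires; freshly
// initialized clusters need a moment before the ServiceAccount appears.
func waitForDefaultServiceAccount(timeout time.Duration) bool {
	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
)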
	I1028 18:34:05.638483   67489 kubeadm.go:394] duration metric: took 4m59.162037455s to StartCluster
	I1028 18:34:05.638504   67489 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.638591   67489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:05.641196   67489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.641497   67489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:05.641626   67489 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:05.641720   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:05.641730   67489 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641748   67489 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641760   67489 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:05.641776   67489 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641781   67489 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641792   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.641794   67489 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641803   67489 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:05.641804   67489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-692033"
	I1028 18:34:05.641832   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.642210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642217   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642229   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642245   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642255   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642314   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642905   67489 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:05.644361   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:05.658478   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I1028 18:34:05.658586   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1028 18:34:05.659040   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659044   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659524   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659546   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659701   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659724   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659879   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660044   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660111   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.660610   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.660648   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.661748   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1028 18:34:05.662150   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.662607   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.662627   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.662983   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.662991   67489 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.663006   67489 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:05.663029   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.663294   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663334   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.663531   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663572   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.675955   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1028 18:34:05.676345   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.676784   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.676802   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.677154   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.677358   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.678723   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I1028 18:34:05.678897   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1028 18:34:05.679025   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.679243   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679337   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679700   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679715   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.679805   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679823   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.680500   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680506   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680706   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.680834   67489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:05.681042   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.681070   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.681982   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:05.682005   67489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:05.682035   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.682363   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.683806   67489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:05.684992   67489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.685011   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:05.685029   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.686903   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.686957   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.686973   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.687218   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.687429   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.687693   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.687850   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.688516   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.688908   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.688933   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.689193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.689372   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.689513   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.689655   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.696743   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I1028 18:34:05.697029   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.697432   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.697458   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.697697   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.697843   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.699192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.699397   67489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.699405   67489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:05.699416   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.702897   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.703368   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703483   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.703667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.703841   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.703996   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.838049   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:05.857829   67489 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866141   67489 node_ready.go:49] node "default-k8s-diff-port-692033" has status "Ready":"True"
	I1028 18:34:05.866158   67489 node_ready.go:38] duration metric: took 8.296617ms for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866167   67489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:05.873027   67489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:05.927585   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:05.927608   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:05.928743   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.946390   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.961712   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:05.961734   67489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:05.993688   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:05.993711   67489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:06.097871   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:06.696189   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696226   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696195   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696300   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696696   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696713   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696697   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696721   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696735   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696742   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696750   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696722   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696794   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696984   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697000   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.697027   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697036   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.720324   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.720346   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.720649   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.720668   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262166   67489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.164245646s)
	I1028 18:34:07.262256   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262277   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262587   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262608   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262607   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262616   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262625   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262890   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262923   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262936   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262948   67489 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-692033"
	I1028 18:34:07.264414   67489 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:07.265449   67489 addons.go:510] duration metric: took 1.623834435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:07.882264   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
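(Editor's note, not part of the log: at this point the metrics-server addon has only been verified as applied; the pod itself is still Pending further down. As an illustrative manual check against this profile, assuming the standard object names used by the minikube metrics-server addon, one could run:

    kubectl --context default-k8s-diff-port-692033 -n kube-system get deployment metrics-server
    kubectl --context default-k8s-diff-port-692033 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-692033 top nodes    # only succeeds once the APIService reports Available
)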
	I1028 18:34:14.313629   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.206119005s)
	I1028 18:34:14.313702   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:14.329212   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:34:14.339407   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:14.349645   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:14.349669   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:14.349716   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:14.359332   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:14.359384   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:14.369627   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:14.381040   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:14.381098   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:14.390359   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.399743   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:14.399783   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.408932   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:14.417840   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:14.417876   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:14.427234   66600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:14.472502   66600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:14.472593   66600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:14.578311   66600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:14.578456   66600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:14.578576   66600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:14.586748   66600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:10.380304   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:12.878632   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.878951   67489 pod_ready.go:93] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:14.878974   67489 pod_ready.go:82] duration metric: took 9.005915421s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:14.878983   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385215   67489 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.385239   67489 pod_ready.go:82] duration metric: took 506.249352ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385250   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390412   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.390435   67489 pod_ready.go:82] duration metric: took 5.177559ms for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390448   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395252   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.395272   67489 pod_ready.go:82] duration metric: took 4.816812ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395281   67489 pod_ready.go:39] duration metric: took 9.52910413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:15.395298   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:15.395349   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:15.413693   67489 api_server.go:72] duration metric: took 9.772160727s to wait for apiserver process to appear ...
	I1028 18:34:15.413715   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:15.413734   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:34:15.417780   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:34:15.418688   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:15.418712   67489 api_server.go:131] duration metric: took 4.989226ms to wait for apiserver health ...
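(Editor's note, not part of the log: the healthz probe performed above against https://192.168.39.215:8444/healthz can be reproduced by hand; a minimal sketch, assuming anonymous access to /healthz is enabled, which is the default:

    curl -k https://192.168.39.215:8444/healthz
    kubectl --context default-k8s-diff-port-692033 get --raw /healthz
)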
	I1028 18:34:15.418720   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:15.424285   67489 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:15.424306   67489 system_pods.go:61] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.424310   67489 system_pods.go:61] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.424315   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.424318   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.424323   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.424327   67489 system_pods.go:61] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.424331   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.424337   67489 system_pods.go:61] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.424344   67489 system_pods.go:61] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.424351   67489 system_pods.go:74] duration metric: took 5.625205ms to wait for pod list to return data ...
	I1028 18:34:15.424359   67489 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:15.427132   67489 default_sa.go:45] found service account: "default"
	I1028 18:34:15.427153   67489 default_sa.go:55] duration metric: took 2.788005ms for default service account to be created ...
	I1028 18:34:15.427161   67489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:15.479404   67489 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:15.479427   67489 system_pods.go:89] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.479433   67489 system_pods.go:89] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.479436   67489 system_pods.go:89] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.479443   67489 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.479448   67489 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.479453   67489 system_pods.go:89] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.479460   67489 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.479472   67489 system_pods.go:89] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.479477   67489 system_pods.go:89] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.479491   67489 system_pods.go:126] duration metric: took 52.324012ms to wait for k8s-apps to be running ...
	I1028 18:34:15.479502   67489 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:15.479548   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:15.493743   67489 system_svc.go:56] duration metric: took 14.233947ms WaitForService to wait for kubelet
	I1028 18:34:15.493772   67489 kubeadm.go:582] duration metric: took 9.852243286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:15.493796   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:15.677127   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:15.677149   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:15.677156   67489 node_conditions.go:105] duration metric: took 183.355591ms to run NodePressure ...
	I1028 18:34:15.677167   67489 start.go:241] waiting for startup goroutines ...
	I1028 18:34:15.677174   67489 start.go:246] waiting for cluster config update ...
	I1028 18:34:15.677183   67489 start.go:255] writing updated cluster config ...
	I1028 18:34:15.677419   67489 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:15.731157   67489 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:15.732912   67489 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-692033" cluster and "default" namespace by default
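(Editor's note, not part of the log: the kubeconfig has now been updated with the context name reported above; a minimal usage sketch outside the test harness:

    kubectl config use-context default-k8s-diff-port-692033
    kubectl get pods -A
)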
	I1028 18:34:14.588528   66600 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:14.588660   66600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:14.588749   66600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:14.588886   66600 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:14.588985   66600 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:14.589089   66600 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:14.589179   66600 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:14.589268   66600 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:14.589362   66600 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:14.589472   66600 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:14.589575   66600 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:14.589638   66600 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:14.589739   66600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:14.902456   66600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:15.107236   66600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:15.198073   66600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:15.618175   66600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:15.804761   66600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:15.805675   66600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:15.809860   66600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:15.811538   66600 out.go:235]   - Booting up control plane ...
	I1028 18:34:15.811658   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:15.811761   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:15.812969   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:15.838182   66600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:15.846044   66600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:15.846126   66600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:15.981748   66600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:15.981899   66600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:16.483112   66600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262752ms
	I1028 18:34:16.483242   66600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:21.484655   66600 kubeadm.go:310] [api-check] The API server is healthy after 5.001327308s
	I1028 18:34:21.498067   66600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:21.508713   66600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:21.537520   66600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:21.537724   66600 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-021370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:21.551416   66600 kubeadm.go:310] [bootstrap-token] Using token: c2otm2.eh2uwearn2r38epe
	I1028 18:34:21.552613   66600 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:21.552721   66600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:21.556871   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:21.563570   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:21.566336   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:21.569226   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:21.575090   66600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:21.890874   66600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:22.315363   66600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:22.892050   66600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:22.892097   66600 kubeadm.go:310] 
	I1028 18:34:22.892198   66600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:22.892214   66600 kubeadm.go:310] 
	I1028 18:34:22.892297   66600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:22.892308   66600 kubeadm.go:310] 
	I1028 18:34:22.892346   66600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:22.892457   66600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:22.892549   66600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:22.892559   66600 kubeadm.go:310] 
	I1028 18:34:22.892628   66600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:22.892643   66600 kubeadm.go:310] 
	I1028 18:34:22.892705   66600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:22.892715   66600 kubeadm.go:310] 
	I1028 18:34:22.892784   66600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:22.892851   66600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:22.892958   66600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:22.892981   66600 kubeadm.go:310] 
	I1028 18:34:22.893093   66600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:22.893197   66600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:22.893212   66600 kubeadm.go:310] 
	I1028 18:34:22.893320   66600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893460   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:22.893506   66600 kubeadm.go:310] 	--control-plane 
	I1028 18:34:22.893515   66600 kubeadm.go:310] 
	I1028 18:34:22.893622   66600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:22.893631   66600 kubeadm.go:310] 
	I1028 18:34:22.893728   66600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893886   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:22.894813   66600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:22.895022   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:34:22.895037   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:22.897376   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:22.898532   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:22.909363   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
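(Editor's note, not part of the log: the 496-byte file copied here is minikube's bridge CNI configuration; its exact contents are not shown. Purely to illustrate the shape of such a bridge + portmap conflist, with all field values assumed rather than taken from the literal file:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
)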
	I1028 18:34:22.930151   66600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:22.930190   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:22.930280   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-021370 minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=embed-certs-021370 minikube.k8s.io/primary=true
	I1028 18:34:22.963249   66600 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:23.216574   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:23.717592   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.217674   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.717602   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.216832   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.717673   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.217668   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.716727   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.217476   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.343171   66600 kubeadm.go:1113] duration metric: took 4.413029537s to wait for elevateKubeSystemPrivileges
	I1028 18:34:27.343201   66600 kubeadm.go:394] duration metric: took 5m1.603783417s to StartCluster
	I1028 18:34:27.343221   66600 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.343302   66600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:27.344913   66600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.345149   66600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:27.345210   66600 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:27.345282   66600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-021370"
	I1028 18:34:27.345297   66600 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-021370"
	W1028 18:34:27.345304   66600 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:27.345310   66600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-021370"
	I1028 18:34:27.345339   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345337   66600 addons.go:69] Setting metrics-server=true in profile "embed-certs-021370"
	I1028 18:34:27.345353   66600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-021370"
	I1028 18:34:27.345360   66600 addons.go:234] Setting addon metrics-server=true in "embed-certs-021370"
	W1028 18:34:27.345369   66600 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:27.345381   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:27.345396   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345742   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345788   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345794   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345798   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.346770   66600 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:27.348169   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:27.361310   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1028 18:34:27.361763   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362073   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 18:34:27.362257   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.362292   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.362550   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362640   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363049   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.363079   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.363204   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.363242   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.363425   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363610   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.363934   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1028 18:34:27.364390   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.364865   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.364885   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.365229   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.365805   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.365852   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.367292   66600 addons.go:234] Setting addon default-storageclass=true in "embed-certs-021370"
	W1028 18:34:27.367314   66600 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:27.367347   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.367738   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.367782   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.381375   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1028 18:34:27.381846   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.382429   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.382441   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.382787   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.382926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.382965   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I1028 18:34:27.383568   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.384121   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.384134   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.384530   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.384730   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.384815   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386107   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I1028 18:34:27.386306   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386435   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.386888   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.386911   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.386977   66600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:27.387284   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.387866   66600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:27.387883   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.388259   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.388628   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:27.388645   66600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:27.388658   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.390614   66600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.390634   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:27.390650   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.393252   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393734   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.393758   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.394122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.394238   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.394364   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.394640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395084   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.395110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.395383   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.395540   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.395677   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.406551   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1028 18:34:27.406907   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.407358   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.407376   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.407699   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.407891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.409287   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.409489   66600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.409502   66600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:27.409517   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.412275   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412828   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.412858   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412984   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.413162   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.413303   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.413453   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.546891   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:27.571837   66600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595105   66600 node_ready.go:49] node "embed-certs-021370" has status "Ready":"True"
	I1028 18:34:27.595127   66600 node_ready.go:38] duration metric: took 23.255834ms for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595156   66600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:27.603107   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:27.635422   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.657051   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.666085   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:27.666110   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:27.706366   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:27.706394   66600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:27.772162   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:27.772191   66600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:27.844116   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:28.411454   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411478   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411522   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411544   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411751   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.411960   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.411982   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.411991   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411998   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.412223   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.412266   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413310   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413326   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413338   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.413344   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.413569   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413584   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.420867   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.420891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.421092   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.421168   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.421169   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957337   66600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.11317187s)
	I1028 18:34:28.957385   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957395   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957696   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957715   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957725   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957733   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957957   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957970   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957988   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957990   66600 addons.go:475] Verifying addon metrics-server=true in "embed-certs-021370"
	I1028 18:34:28.959590   66600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:28.961127   66600 addons.go:510] duration metric: took 1.615922156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:29.611126   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:32.110577   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:34.610544   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:37.111319   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.111342   66600 pod_ready.go:82] duration metric: took 9.508204126s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.111351   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119547   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.119571   66600 pod_ready.go:82] duration metric: took 8.212577ms for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119581   66600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126030   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.126048   66600 pod_ready.go:82] duration metric: took 6.46043ms for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126056   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132366   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.132386   66600 pod_ready.go:82] duration metric: took 6.323715ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132394   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137151   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.137171   66600 pod_ready.go:82] duration metric: took 4.770272ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137182   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507159   66600 pod_ready.go:93] pod "kube-proxy-nrr6g" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.507180   66600 pod_ready.go:82] duration metric: took 369.991591ms for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507189   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908006   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.908030   66600 pod_ready.go:82] duration metric: took 400.834669ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908038   66600 pod_ready.go:39] duration metric: took 10.312872321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:37.908052   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:37.908098   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:37.924515   66600 api_server.go:72] duration metric: took 10.579335154s to wait for apiserver process to appear ...
	I1028 18:34:37.924552   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:37.924572   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:34:37.929438   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:34:37.930716   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:37.930742   66600 api_server.go:131] duration metric: took 6.181503ms to wait for apiserver health ...
	I1028 18:34:37.930752   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:38.113401   66600 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:38.113430   66600 system_pods.go:61] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.113435   66600 system_pods.go:61] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.113439   66600 system_pods.go:61] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.113442   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.113446   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.113449   66600 system_pods.go:61] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.113452   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.113457   66600 system_pods.go:61] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.113462   66600 system_pods.go:61] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.113468   66600 system_pods.go:74] duration metric: took 182.711396ms to wait for pod list to return data ...
	I1028 18:34:38.113475   66600 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:38.309139   66600 default_sa.go:45] found service account: "default"
	I1028 18:34:38.309170   66600 default_sa.go:55] duration metric: took 195.688587ms for default service account to be created ...
	I1028 18:34:38.309182   66600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:38.510307   66600 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:38.510336   66600 system_pods.go:89] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.510341   66600 system_pods.go:89] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.510345   66600 system_pods.go:89] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.510349   66600 system_pods.go:89] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.510352   66600 system_pods.go:89] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.510355   66600 system_pods.go:89] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.510360   66600 system_pods.go:89] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.510368   66600 system_pods.go:89] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.510376   66600 system_pods.go:89] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.510391   66600 system_pods.go:126] duration metric: took 201.199416ms to wait for k8s-apps to be running ...
	I1028 18:34:38.510403   66600 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:38.510448   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:38.526043   66600 system_svc.go:56] duration metric: took 15.628796ms WaitForService to wait for kubelet
	I1028 18:34:38.526075   66600 kubeadm.go:582] duration metric: took 11.18089878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:38.526109   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:38.707568   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:38.707594   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:38.707604   66600 node_conditions.go:105] duration metric: took 181.491056ms to run NodePressure ...
	I1028 18:34:38.707615   66600 start.go:241] waiting for startup goroutines ...
	I1028 18:34:38.707621   66600 start.go:246] waiting for cluster config update ...
	I1028 18:34:38.707631   66600 start.go:255] writing updated cluster config ...
	I1028 18:34:38.707950   66600 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:38.755355   66600 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:38.757256   66600 out.go:177] * Done! kubectl is now configured to use "embed-certs-021370" cluster and "default" namespace by default
	I1028 18:34:49.381931   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:34:49.382111   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:34:49.383570   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:34:49.383633   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:49.383732   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:49.383859   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:49.383975   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:34:49.384073   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:49.385654   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:49.385757   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:49.385847   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:49.385937   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:49.386008   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:49.386118   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:49.386214   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:49.386316   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:49.386391   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:49.386478   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:49.386597   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:49.386643   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:49.386724   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:49.386813   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:49.386891   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:49.386983   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:49.387070   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:49.387209   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:49.387330   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:49.387389   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:49.387474   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:49.389653   67149 out.go:235]   - Booting up control plane ...
	I1028 18:34:49.389760   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:49.389867   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:49.389971   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:49.390088   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:49.390228   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:34:49.390277   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:34:49.390355   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390550   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390645   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390832   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390903   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391069   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391163   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391354   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391452   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391649   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391657   67149 kubeadm.go:310] 
	I1028 18:34:49.391691   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:34:49.391743   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:34:49.391758   67149 kubeadm.go:310] 
	I1028 18:34:49.391789   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:34:49.391822   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:34:49.391908   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:34:49.391914   67149 kubeadm.go:310] 
	I1028 18:34:49.392024   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:34:49.392073   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:34:49.392133   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:34:49.392142   67149 kubeadm.go:310] 
	I1028 18:34:49.392267   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:34:49.392363   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:34:49.392380   67149 kubeadm.go:310] 
	I1028 18:34:49.392525   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:34:49.392629   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:34:49.392737   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:34:49.392830   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:34:49.392879   67149 kubeadm.go:310] 
	W1028 18:34:49.392949   67149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:34:49.392991   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:34:49.869859   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:49.884524   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:49.896293   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:49.896318   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:49.896354   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:49.907312   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:49.907364   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:49.917926   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:49.928001   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:49.928048   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:49.938687   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.949217   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:49.949268   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.959955   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:49.970105   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:49.970156   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:49.980760   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:50.212973   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:36:46.686631   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:36:46.686753   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:36:46.688224   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:36:46.688325   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:36:46.688449   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:36:46.688587   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:36:46.688726   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:36:46.688813   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:36:46.690320   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:36:46.690427   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:36:46.690524   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:36:46.690627   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:36:46.690720   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:36:46.690824   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:36:46.690897   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:36:46.690984   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:36:46.691064   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:36:46.691161   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:36:46.691253   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:36:46.691309   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:36:46.691379   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:36:46.691426   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:36:46.691471   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:36:46.691547   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:36:46.691619   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:36:46.691713   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:36:46.691814   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:36:46.691864   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:36:46.691951   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:36:46.693258   67149 out.go:235]   - Booting up control plane ...
	I1028 18:36:46.693374   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:36:46.693471   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:36:46.693566   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:36:46.693682   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:36:46.693870   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:36:46.693930   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:36:46.694023   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694253   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694343   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694527   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694614   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694798   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694894   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695053   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695119   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695315   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695324   67149 kubeadm.go:310] 
	I1028 18:36:46.695357   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:36:46.695392   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:36:46.695398   67149 kubeadm.go:310] 
	I1028 18:36:46.695427   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:36:46.695456   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:36:46.695542   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:36:46.695549   67149 kubeadm.go:310] 
	I1028 18:36:46.695665   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:36:46.695717   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:36:46.695767   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:36:46.695781   67149 kubeadm.go:310] 
	I1028 18:36:46.695921   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:36:46.696037   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:36:46.696048   67149 kubeadm.go:310] 
	I1028 18:36:46.696177   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:36:46.696285   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:36:46.696390   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:36:46.696512   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:36:46.696560   67149 kubeadm.go:310] 
	I1028 18:36:46.696579   67149 kubeadm.go:394] duration metric: took 7m56.601380499s to StartCluster
	I1028 18:36:46.696618   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:36:46.696670   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:36:46.738714   67149 cri.go:89] found id: ""
	I1028 18:36:46.738741   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.738749   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:36:46.738757   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:36:46.738822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:36:46.772906   67149 cri.go:89] found id: ""
	I1028 18:36:46.772934   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.772944   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:36:46.772951   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:36:46.773028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:36:46.808785   67149 cri.go:89] found id: ""
	I1028 18:36:46.808809   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.808819   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:36:46.808827   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:36:46.808884   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:36:46.842977   67149 cri.go:89] found id: ""
	I1028 18:36:46.843007   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.843016   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:36:46.843022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:36:46.843095   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:36:46.878121   67149 cri.go:89] found id: ""
	I1028 18:36:46.878148   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.878159   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:36:46.878166   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:36:46.878231   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:36:46.911953   67149 cri.go:89] found id: ""
	I1028 18:36:46.911977   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.911984   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:36:46.911990   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:36:46.912054   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:36:46.944291   67149 cri.go:89] found id: ""
	I1028 18:36:46.944317   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.944324   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:36:46.944329   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:36:46.944379   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:36:46.976525   67149 cri.go:89] found id: ""
	I1028 18:36:46.976554   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.976564   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:36:46.976575   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:36:46.976588   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:36:47.026517   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:36:47.026544   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:36:47.041198   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:36:47.041231   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:36:47.115650   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:36:47.115681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:36:47.115695   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:36:47.218059   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:36:47.218093   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 18:36:47.257114   67149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:36:47.257182   67149 out.go:270] * 
	W1028 18:36:47.257240   67149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.257280   67149 out.go:270] * 
	W1028 18:36:47.258088   67149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:36:47.261521   67149 out.go:201] 
	W1028 18:36:47.262707   67149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.262742   67149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:36:47.262760   67149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:36:47.264073   67149 out.go:201] 
	
	
	==> CRI-O <==
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.126123475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140969126101540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1c38318-4798-4314-a179-0f0948cf46da name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.126848383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c20265e7-4719-4a21-8757-792072a9f68b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.126901186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c20265e7-4719-4a21-8757-792072a9f68b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.127083417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c20265e7-4719-4a21-8757-792072a9f68b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.166113865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f799d490-74e0-46ea-ba68-c29d85791fc0 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.166265319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f799d490-74e0-46ea-ba68-c29d85791fc0 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.167369568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbb40e71-317b-4bcc-a64d-c782b2700b5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.167776167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140969167750568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbb40e71-317b-4bcc-a64d-c782b2700b5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.168376509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df1691ba-a61e-4deb-9a5b-6bb2a2d66664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.168429173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df1691ba-a61e-4deb-9a5b-6bb2a2d66664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.168618293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df1691ba-a61e-4deb-9a5b-6bb2a2d66664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.209885131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4101851b-e676-4a98-9c1a-c71e9f1a8500 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.209954343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4101851b-e676-4a98-9c1a-c71e9f1a8500 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.211237140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=873a7081-6f99-4fdd-affc-5121c56ee0a9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.211623558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140969211601327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=873a7081-6f99-4fdd-affc-5121c56ee0a9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.212529848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97fb1623-d42a-4190-b161-7da64b86f31d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.212602497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97fb1623-d42a-4190-b161-7da64b86f31d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.212797526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97fb1623-d42a-4190-b161-7da64b86f31d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.249473630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cc02e62-1d07-48e8-8057-64f99fb6e558 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.249543425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cc02e62-1d07-48e8-8057-64f99fb6e558 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.250372597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe7b6d06-3739-4e02-873e-d32fb5a9dadc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.250683042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140969250662768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe7b6d06-3739-4e02-873e-d32fb5a9dadc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.251248577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d31eca53-c1f2-440d-9b34-b9691d3718ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.251330169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d31eca53-c1f2-440d-9b34-b9691d3718ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:42:49 no-preload-051152 crio[707]: time="2024-10-28 18:42:49.251511297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d31eca53-c1f2-440d-9b34-b9691d3718ae name=/runtime.v1.RuntimeService/ListContainers
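The crio debug entries above are ordinary CRI RuntimeService calls (Version, ImageFsInfo, ListContainers) being answered by CRI-O 1.29.1 while the logs were collected. For reference only, the sketch below issues the same ListContainers RPC in Go against the CRI-O socket; the socket path is taken from the kubeadm.alpha.kubernetes.io/cri-socket annotation shown in the node description further down, and the output formatting is an editorial assumption, not part of the captured logs.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (path assumed from the node's cri-socket annotation).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same call as the ListContainersRequest entries in the crio debug log:
		// an empty filter returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Short container ID plus name, attempt and state, similar to the
			// "container status" table below.
			fmt.Printf("%s  %-25s attempt=%d state=%s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}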
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a7490abcdc75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ec6d303457c48       storage-provisioner
	df2a4392d30b3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   086ada1a631f5       coredns-7c65d6cfc9-sx5qg
	57afc8bfca048       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   9b8a560aaa473       coredns-7c65d6cfc9-mxhp2
	f0c7e3bac7dcc       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   38d15fad63b86       kube-proxy-28qht
	1df93a4fd5298       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   4f0c484a1a871       kube-scheduler-no-preload-051152
	5fad3c448c620       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   f9c0ed8466dbb       kube-controller-manager-no-preload-051152
	6d2229645331e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   459669cfa829b       etcd-no-preload-051152
	d68dd6a8f4e4d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   a3f4e0abdf259       kube-apiserver-no-preload-051152
	9e9a4b33605aa       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   7c8e1b62281f6       kube-apiserver-no-preload-051152
	
	
	==> coredns [57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-051152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-051152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=no-preload-051152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:33:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-051152
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:42:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:38:49 +0000   Mon, 28 Oct 2024 18:33:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:38:49 +0000   Mon, 28 Oct 2024 18:33:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:38:49 +0000   Mon, 28 Oct 2024 18:33:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:38:49 +0000   Mon, 28 Oct 2024 18:33:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.78
	  Hostname:    no-preload-051152
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d1abce9a1694ead8a0537b8e0e44c6e
	  System UUID:                9d1abce9-a169-4ead-8a05-37b8e0e44c6e
	  Boot ID:                    da7132f0-f8af-4057-9464-63b6b5bf9be7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-mxhp2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-sx5qg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-051152                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-051152             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-051152    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-28qht                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-051152             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-9rh4q              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node no-preload-051152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node no-preload-051152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node no-preload-051152 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node no-preload-051152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node no-preload-051152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node no-preload-051152 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node no-preload-051152 event: Registered Node no-preload-051152 in Controller
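The node description above (conditions, capacity, allocated resources, events) is the Node object as reported by the API server at collection time. As a reference sketch only, the client-go snippet below reads the same condition table for this node; it assumes the current kubeconfig context points at this cluster, which the captured logs do not themselves confirm.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); assumes its current
		// context targets the cluster shown in this report.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatalf("load kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("create clientset: %v", err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"no-preload-051152", metav1.GetOptions{})
		if err != nil {
			log.Fatalf("get node: %v", err)
		}
		// Print the same Type/Status/Reason columns as the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}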
	
	
	==> dmesg <==
	[  +0.040045] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.853028] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.435450] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.443590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.645745] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060364] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.200444] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.113118] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.272482] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.752223] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.058801] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.832003] systemd-fstab-generator[1422]: Ignoring "noauto" option for root device
	[  +4.057958] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.057504] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.266517] kauditd_printk_skb: 25 callbacks suppressed
	[Oct28 18:33] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.460467] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +4.548056] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.518414] systemd-fstab-generator[3445]: Ignoring "noauto" option for root device
	[  +5.340523] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.111189] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.550282] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402] <==
	{"level":"info","ts":"2024-10-28T18:33:27.886882Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.78:2380"}
	{"level":"info","ts":"2024-10-28T18:33:27.886983Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.78:2380"}
	{"level":"info","ts":"2024-10-28T18:33:27.887490Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T18:33:27.888966Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T18:33:27.889077Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9fc63996407e1dc3","initial-advertise-peer-urls":["https://192.168.61.78:2380"],"listen-peer-urls":["https://192.168.61.78:2380"],"advertise-client-urls":["https://192.168.61.78:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.78:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T18:33:28.194232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9fc63996407e1dc3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T18:33:28.194327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9fc63996407e1dc3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T18:33:28.194368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9fc63996407e1dc3 received MsgPreVoteResp from 9fc63996407e1dc3 at term 1"}
	{"level":"info","ts":"2024-10-28T18:33:28.194397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9fc63996407e1dc3 became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:28.194421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9fc63996407e1dc3 received MsgVoteResp from 9fc63996407e1dc3 at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:28.194447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9fc63996407e1dc3 became leader at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:28.194473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9fc63996407e1dc3 elected leader 9fc63996407e1dc3 at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:28.198388Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9fc63996407e1dc3","local-member-attributes":"{Name:no-preload-051152 ClientURLs:[https://192.168.61.78:2379]}","request-path":"/0/members/9fc63996407e1dc3/attributes","cluster-id":"96f5678f0acb0355","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T18:33:28.198461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:33:28.198851Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.199910Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:33:28.200827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T18:33:28.208247Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"96f5678f0acb0355","local-member-id":"9fc63996407e1dc3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.208360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.208397Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.208409Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:33:28.208916Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:33:28.209632Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.78:2379"}
	{"level":"info","ts":"2024-10-28T18:33:28.230003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:33:28.230075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:42:49 up 14 min,  0 users,  load average: 0.29, 0.22, 0.21
	Linux no-preload-051152 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab] <==
	W1028 18:33:19.667647       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.667749       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.684396       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.711116       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.730489       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.742195       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.753914       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.791593       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.879301       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.883731       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.963455       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.984302       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.987658       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:20.026963       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:20.117528       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:20.274561       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:23.125712       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:23.563563       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.298603       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.319531       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.427358       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.541580       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.567411       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.572744       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.726344       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 18:38:31.165318       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:38:31.165501       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 18:38:31.166727       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:38:31.166754       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:39:31.167093       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:39:31.167236       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 18:39:31.167322       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:39:31.167387       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:39:31.168385       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:39:31.168428       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:41:31.169294       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:41:31.169295       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:41:31.169715       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 18:41:31.169815       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:41:31.170975       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:41:31.171045       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb] <==
	E1028 18:37:37.194821       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:37:37.629030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:38:07.201637       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:38:07.636792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:38:37.207706       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:38:37.644971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:38:49.243368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-051152"
	E1028 18:39:07.218010       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:39:07.653200       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:39:37.224997       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:39:37.660672       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:39:42.611206       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="374.144µs"
	I1028 18:39:57.607956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="63.887µs"
	E1028 18:40:07.230313       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:40:07.668784       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:40:37.237925       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:40:37.677429       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:41:07.246626       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:41:07.685968       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:41:37.252965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:41:37.694290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:42:07.258943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:42:07.704450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:42:37.265655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:42:37.713114       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:33:38.942498       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:33:38.959672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.78"]
	E1028 18:33:38.959741       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:33:39.464391       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:33:39.464444       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:33:39.464498       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:33:39.642368       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:33:39.646457       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:33:39.648773       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:33:39.655132       1 config.go:199] "Starting service config controller"
	I1028 18:33:39.655326       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:33:39.655444       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:33:39.655533       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:33:39.666686       1 config.go:328] "Starting node config controller"
	I1028 18:33:39.666703       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:33:39.757660       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 18:33:39.757704       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:33:39.776754       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7] <==
	W1028 18:33:30.270624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:30.271781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 18:33:30.271832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 18:33:30.271882       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:33:30.271933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:33:30.271998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.098324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:31.098437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.222766       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 18:33:31.222949       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 18:33:31.262709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:31.262809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.359359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 18:33:31.359480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.378046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:33:31.378103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.399813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:31.399939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.480112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:33:31.480266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1028 18:33:33.049737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 18:41:35 no-preload-051152 kubelet[3452]: E1028 18:41:35.593820    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:41:42 no-preload-051152 kubelet[3452]: E1028 18:41:42.772840    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140902772632907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:41:42 no-preload-051152 kubelet[3452]: E1028 18:41:42.772884    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140902772632907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:41:47 no-preload-051152 kubelet[3452]: E1028 18:41:47.595762    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:41:52 no-preload-051152 kubelet[3452]: E1028 18:41:52.774411    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140912774009036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:41:52 no-preload-051152 kubelet[3452]: E1028 18:41:52.774458    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140912774009036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:00 no-preload-051152 kubelet[3452]: E1028 18:42:00.594352    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:42:02 no-preload-051152 kubelet[3452]: E1028 18:42:02.775774    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140922775565702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:02 no-preload-051152 kubelet[3452]: E1028 18:42:02.775821    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140922775565702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:12 no-preload-051152 kubelet[3452]: E1028 18:42:12.777973    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140932777573384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:12 no-preload-051152 kubelet[3452]: E1028 18:42:12.778026    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140932777573384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:15 no-preload-051152 kubelet[3452]: E1028 18:42:15.593712    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:42:22 no-preload-051152 kubelet[3452]: E1028 18:42:22.780118    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140942779637416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:22 no-preload-051152 kubelet[3452]: E1028 18:42:22.781519    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140942779637416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:29 no-preload-051152 kubelet[3452]: E1028 18:42:29.594812    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:42:32 no-preload-051152 kubelet[3452]: E1028 18:42:32.630346    3452 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 18:42:32 no-preload-051152 kubelet[3452]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 18:42:32 no-preload-051152 kubelet[3452]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 18:42:32 no-preload-051152 kubelet[3452]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 18:42:32 no-preload-051152 kubelet[3452]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 18:42:32 no-preload-051152 kubelet[3452]: E1028 18:42:32.783767    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140952783511538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:32 no-preload-051152 kubelet[3452]: E1028 18:42:32.783806    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140952783511538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:42 no-preload-051152 kubelet[3452]: E1028 18:42:42.595311    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:42:42 no-preload-051152 kubelet[3452]: E1028 18:42:42.785129    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140962784852098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:42 no-preload-051152 kubelet[3452]: E1028 18:42:42.785229    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140962784852098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f] <==
	I1028 18:33:39.882785       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 18:33:39.898522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 18:33:39.898584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 18:33:39.912227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 18:33:39.912408       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-051152_20aeb7d9-1f7f-478f-bfff-47100469eed1!
	I1028 18:33:39.915822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71223ce6-ec64-472b-bde7-65690fd6dd67", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-051152_20aeb7d9-1f7f-478f-bfff-47100469eed1 became leader
	I1028 18:33:40.013231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-051152_20aeb7d9-1f7f-478f-bfff-47100469eed1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-051152 -n no-preload-051152
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-051152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9rh4q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-051152 describe pod metrics-server-6867b74b74-9rh4q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-051152 describe pod metrics-server-6867b74b74-9rh4q: exit status 1 (62.624583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9rh4q" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-051152 describe pod metrics-server-6867b74b74-9rh4q: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 18:43:16.274903778 +0000 UTC m=+5835.732948099
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-692033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-692033 logs -n 25: (1.926824013s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-703793                              | running-upgrade-703793       | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-021370            | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:25:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:25:35.146308   67489 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:25:35.146467   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146474   67489 out.go:358] Setting ErrFile to fd 2...
	I1028 18:25:35.146480   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146973   67489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:25:35.147825   67489 out.go:352] Setting JSON to false
	I1028 18:25:35.148718   67489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7678,"bootTime":1730132257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:25:35.148810   67489 start.go:139] virtualization: kvm guest
	I1028 18:25:35.150695   67489 out.go:177] * [default-k8s-diff-port-692033] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:25:35.151797   67489 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:25:35.151797   67489 notify.go:220] Checking for updates...
	I1028 18:25:35.154193   67489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:25:35.155491   67489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:25:35.156576   67489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:25:35.157619   67489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:25:35.158702   67489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:25:35.160202   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:25:35.160602   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.160658   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.175095   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I1028 18:25:35.175421   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.175848   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.175863   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.176187   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.176387   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.176667   67489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:25:35.177210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.177325   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.191270   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I1028 18:25:35.191687   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.192092   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.192114   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.192388   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.192551   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.222738   67489 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:25:35.223900   67489 start.go:297] selected driver: kvm2
	I1028 18:25:35.223910   67489 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.224018   67489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:25:35.224696   67489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.224770   67489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:25:35.238839   67489 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:25:35.239228   67489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:25:35.239258   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:25:35.239310   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:25:35.239360   67489 start.go:340] cluster config:
	{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.239480   67489 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.241175   67489 out.go:177] * Starting "default-k8s-diff-port-692033" primary control-plane node in "default-k8s-diff-port-692033" cluster
	I1028 18:25:37.248702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:35.242393   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:25:35.242423   67489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:25:35.242432   67489 cache.go:56] Caching tarball of preloaded images
	I1028 18:25:35.242504   67489 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:25:35.242517   67489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:25:35.242600   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:25:35.242763   67489 start.go:360] acquireMachinesLock for default-k8s-diff-port-692033: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:25:40.320712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:46.400713   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:49.472709   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:55.552712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:58.624703   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:04.704707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:07.776740   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:13.856735   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:16.928744   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:23.008721   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:26.080668   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:32.160706   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:35.232663   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:41.312774   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:44.384739   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:50.464729   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:53.536702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:59.616750   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:02.688719   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:08.768731   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:11.840771   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:17.920756   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:20.992753   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:27.072785   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:30.144726   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:36.224704   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:39.296825   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:45.376692   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:48.448699   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:54.528707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:57.600754   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:28:00.605468   66801 start.go:364] duration metric: took 4m12.368996576s to acquireMachinesLock for "no-preload-051152"
	I1028 18:28:00.605517   66801 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:00.605525   66801 fix.go:54] fixHost starting: 
	I1028 18:28:00.605815   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:00.605850   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:00.621828   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1028 18:28:00.622237   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:00.622654   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:28:00.622674   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:00.622975   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:00.623150   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:00.623272   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:28:00.624880   66801 fix.go:112] recreateIfNeeded on no-preload-051152: state=Stopped err=<nil>
	I1028 18:28:00.624910   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	W1028 18:28:00.625076   66801 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:00.627065   66801 out.go:177] * Restarting existing kvm2 VM for "no-preload-051152" ...
	I1028 18:28:00.603089   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:00.603122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603425   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:28:00.603450   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603663   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:28:00.605343   66600 machine.go:96] duration metric: took 4m37.432159141s to provisionDockerMachine
	I1028 18:28:00.605380   66600 fix.go:56] duration metric: took 4m37.452432846s for fixHost
	I1028 18:28:00.605387   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 4m37.452449736s
	W1028 18:28:00.605419   66600 start.go:714] error starting host: provision: host is not running
	W1028 18:28:00.605517   66600 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 18:28:00.605528   66600 start.go:729] Will try again in 5 seconds ...
	I1028 18:28:00.628172   66801 main.go:141] libmachine: (no-preload-051152) Calling .Start
	I1028 18:28:00.628308   66801 main.go:141] libmachine: (no-preload-051152) Ensuring networks are active...
	I1028 18:28:00.629123   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network default is active
	I1028 18:28:00.629467   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network mk-no-preload-051152 is active
	I1028 18:28:00.629782   66801 main.go:141] libmachine: (no-preload-051152) Getting domain xml...
	I1028 18:28:00.630687   66801 main.go:141] libmachine: (no-preload-051152) Creating domain...
	I1028 18:28:01.819872   66801 main.go:141] libmachine: (no-preload-051152) Waiting to get IP...
	I1028 18:28:01.820792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:01.821214   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:01.821287   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:01.821204   68016 retry.go:31] will retry after 269.081621ms: waiting for machine to come up
	I1028 18:28:02.091799   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.092220   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.092242   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.092175   68016 retry.go:31] will retry after 341.926163ms: waiting for machine to come up
	I1028 18:28:02.435679   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.436035   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.436067   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.435982   68016 retry.go:31] will retry after 355.739166ms: waiting for machine to come up
	I1028 18:28:02.793549   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.793928   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.793953   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.793881   68016 retry.go:31] will retry after 496.396184ms: waiting for machine to come up
	I1028 18:28:05.607678   66600 start.go:360] acquireMachinesLock for embed-certs-021370: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:28:03.291568   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.292038   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.292068   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.291978   68016 retry.go:31] will retry after 561.311245ms: waiting for machine to come up
	I1028 18:28:03.854782   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.855137   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.855166   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.855088   68016 retry.go:31] will retry after 574.675969ms: waiting for machine to come up
	I1028 18:28:04.431784   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:04.432226   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:04.432250   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:04.432177   68016 retry.go:31] will retry after 1.028136295s: waiting for machine to come up
	I1028 18:28:05.461477   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:05.461839   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:05.461869   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:05.461795   68016 retry.go:31] will retry after 955.343831ms: waiting for machine to come up
	I1028 18:28:06.418161   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:06.418629   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:06.418659   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:06.418576   68016 retry.go:31] will retry after 1.615930502s: waiting for machine to come up
	I1028 18:28:08.036275   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:08.036641   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:08.036662   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:08.036615   68016 retry.go:31] will retry after 2.111463198s: waiting for machine to come up
	I1028 18:28:10.150891   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:10.151403   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:10.151429   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:10.151351   68016 retry.go:31] will retry after 2.35232289s: waiting for machine to come up
	I1028 18:28:12.506070   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:12.506471   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:12.506494   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:12.506447   68016 retry.go:31] will retry after 2.874687772s: waiting for machine to come up
	I1028 18:28:15.384360   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:15.384680   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:15.384712   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:15.384636   68016 retry.go:31] will retry after 3.299950406s: waiting for machine to come up
	I1028 18:28:19.893083   67149 start.go:364] duration metric: took 3m43.747535803s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:28:19.893161   67149 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:19.893170   67149 fix.go:54] fixHost starting: 
	I1028 18:28:19.893556   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:19.893608   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:19.909857   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I1028 18:28:19.910215   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:19.910669   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:28:19.910690   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:19.911049   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:19.911241   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:19.911395   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:28:19.912825   67149 fix.go:112] recreateIfNeeded on old-k8s-version-223868: state=Stopped err=<nil>
	I1028 18:28:19.912856   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	W1028 18:28:19.912996   67149 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:19.915041   67149 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	I1028 18:28:19.916422   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .Start
	I1028 18:28:19.916611   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:28:19.917295   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:28:19.917560   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:28:19.917951   67149 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:28:19.918628   67149 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:28:18.688243   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.688710   66801 main.go:141] libmachine: (no-preload-051152) Found IP for machine: 192.168.61.78
	I1028 18:28:18.688738   66801 main.go:141] libmachine: (no-preload-051152) Reserving static IP address...
	I1028 18:28:18.688754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has current primary IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.689151   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.689174   66801 main.go:141] libmachine: (no-preload-051152) Reserved static IP address: 192.168.61.78
	I1028 18:28:18.689188   66801 main.go:141] libmachine: (no-preload-051152) DBG | skip adding static IP to network mk-no-preload-051152 - found existing host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"}
	I1028 18:28:18.689198   66801 main.go:141] libmachine: (no-preload-051152) Waiting for SSH to be available...
	I1028 18:28:18.689217   66801 main.go:141] libmachine: (no-preload-051152) DBG | Getting to WaitForSSH function...
	I1028 18:28:18.691372   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691721   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.691754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691861   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH client type: external
	I1028 18:28:18.691890   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa (-rw-------)
	I1028 18:28:18.691950   66801 main.go:141] libmachine: (no-preload-051152) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:18.691967   66801 main.go:141] libmachine: (no-preload-051152) DBG | About to run SSH command:
	I1028 18:28:18.691979   66801 main.go:141] libmachine: (no-preload-051152) DBG | exit 0
	I1028 18:28:18.816169   66801 main.go:141] libmachine: (no-preload-051152) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:18.816571   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetConfigRaw
	I1028 18:28:18.817209   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:18.819569   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.819891   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.819913   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.820164   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:28:18.820375   66801 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:18.820392   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:18.820618   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.822580   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.822953   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.822983   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.823096   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.823250   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823390   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823537   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.823687   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.823878   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.823890   66801 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:18.932489   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:18.932516   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.932769   66801 buildroot.go:166] provisioning hostname "no-preload-051152"
	I1028 18:28:18.932798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.933003   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.935565   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.935938   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.935965   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.936147   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.936346   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936513   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936674   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.936838   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.936994   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.937006   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-051152 && echo "no-preload-051152" | sudo tee /etc/hostname
	I1028 18:28:19.057840   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-051152
	
	I1028 18:28:19.057872   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.060536   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.060917   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.060946   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.061068   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.061237   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061405   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061544   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.061700   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.061848   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.061863   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-051152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-051152/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-051152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:19.180890   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:19.180920   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:19.180957   66801 buildroot.go:174] setting up certificates
	I1028 18:28:19.180971   66801 provision.go:84] configureAuth start
	I1028 18:28:19.180985   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:19.181299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.183792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184144   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.184172   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184309   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.186298   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186588   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.186616   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186722   66801 provision.go:143] copyHostCerts
	I1028 18:28:19.186790   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:19.186804   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:19.186868   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:19.186974   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:19.186986   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:19.187023   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:19.187107   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:19.187115   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:19.187146   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:19.187197   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.no-preload-051152 san=[127.0.0.1 192.168.61.78 localhost minikube no-preload-051152]
	I1028 18:28:19.275109   66801 provision.go:177] copyRemoteCerts
	I1028 18:28:19.275175   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:19.275200   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.278392   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.278946   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.278978   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.279183   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.279454   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.279651   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.279789   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.362094   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:19.384635   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:28:19.406649   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:19.428807   66801 provision.go:87] duration metric: took 247.825267ms to configureAuth
	I1028 18:28:19.428830   66801 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:19.429026   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:28:19.429090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.431615   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.431928   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.431954   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.432090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.432278   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432434   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432602   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.432786   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.432932   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.432946   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:19.655137   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:19.655163   66801 machine.go:96] duration metric: took 834.775161ms to provisionDockerMachine
	I1028 18:28:19.655175   66801 start.go:293] postStartSetup for "no-preload-051152" (driver="kvm2")
	I1028 18:28:19.655185   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:19.655199   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.655509   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:19.655532   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.658099   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658411   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.658442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658566   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.658744   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.658884   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.659013   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.743030   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:19.746986   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:19.747007   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:19.747081   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:19.747177   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:19.747290   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:19.756378   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:19.779243   66801 start.go:296] duration metric: took 124.056855ms for postStartSetup
	I1028 18:28:19.779283   66801 fix.go:56] duration metric: took 19.173756385s for fixHost
	I1028 18:28:19.779305   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.781887   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782205   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.782226   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782367   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.782557   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782709   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782836   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.782999   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.783180   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.783191   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:19.892920   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140099.866892804
	
	I1028 18:28:19.892944   66801 fix.go:216] guest clock: 1730140099.866892804
	I1028 18:28:19.892954   66801 fix.go:229] Guest: 2024-10-28 18:28:19.866892804 +0000 UTC Remote: 2024-10-28 18:28:19.779287594 +0000 UTC m=+271.674302547 (delta=87.60521ms)
	I1028 18:28:19.892997   66801 fix.go:200] guest clock delta is within tolerance: 87.60521ms
	I1028 18:28:19.893008   66801 start.go:83] releasing machines lock for "no-preload-051152", held for 19.287505767s
	I1028 18:28:19.893034   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.893299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.895775   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896177   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.896204   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896362   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.896826   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897023   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897133   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:19.897171   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.897267   66801 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:19.897291   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.899703   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.899995   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900031   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900054   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900208   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900374   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900416   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900550   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.900626   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900707   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.900818   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900944   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.901098   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.982201   66801 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:20.008913   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:20.157816   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:20.165773   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:20.165837   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:20.187342   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:20.187359   66801 start.go:495] detecting cgroup driver to use...
	I1028 18:28:20.187423   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:20.204825   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:20.220702   66801 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:20.220776   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:20.238812   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:20.253664   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:20.363567   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:20.534475   66801 docker.go:233] disabling docker service ...
	I1028 18:28:20.534564   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:20.548424   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:20.564292   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:20.687135   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:20.796225   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:20.810327   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:20.828804   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:28:20.828866   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.838719   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:20.838768   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.849166   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.862811   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.875223   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:20.885402   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.895602   66801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.914163   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.924194   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:20.934907   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:20.934958   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:20.948898   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:20.958955   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:21.069438   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:21.175294   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:21.175379   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
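
The log above restarts CRI-O and then waits up to 60s for its socket to appear (and next waits for crictl to answer). A minimal local sketch of that kind of bounded poll is below; the 500ms interval is an assumption, and unlike minikube the check runs on the local filesystem rather than over SSH.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls until path exists or the timeout elapses, mirroring
    // the "Will wait 60s for socket path" step in the log above.
    func waitForFile(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // assumed poll interval
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket is present")
    }
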
	I1028 18:28:21.179886   66801 start.go:563] Will wait 60s for crictl version
	I1028 18:28:21.179942   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.184195   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:21.226939   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
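
Once the socket is up, the log runs sudo /usr/bin/crictl version and records the runtime name and version shown above (cri-o 1.29.1). A small sketch of issuing the same query and extracting the RuntimeVersion field follows; it shells out to a plain crictl on the local machine, whereas the run above goes through sudo over SSH.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runtimeVersion runs `crictl version` and pulls the RuntimeVersion field
    // out of its plain-text output (the same fields shown in the log record above).
    func runtimeVersion() (string, error) {
        out, err := exec.Command("crictl", "version").Output()
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.HasPrefix(line, "RuntimeVersion:") {
                return strings.TrimSpace(strings.TrimPrefix(line, "RuntimeVersion:")), nil
            }
        }
        return "", fmt.Errorf("RuntimeVersion not found in crictl output")
    }

    func main() {
        v, err := runtimeVersion()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("container runtime version:", v) // e.g. 1.29.1 in the run above
    }
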
	I1028 18:28:21.227043   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.254702   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.284607   66801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:28:21.285906   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:21.288560   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.288918   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:21.288945   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.289132   66801 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:21.293108   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
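
The bash one-liner above refreshes the host.minikube.internal entry in /etc/hosts: it filters out any existing line ending in that hostname and appends a fresh "IP<tab>hostname" mapping. A rough local Go equivalent of that pattern (not minikube's code; writing /etc/hosts needs root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line for the given hostname and
    // appends "ip\thostname", mimicking the grep -v / echo one-liner above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // remove the stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
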
	I1028 18:28:21.307303   66801 kubeadm.go:883] updating cluster {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:21.307447   66801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:28:21.307495   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:21.347493   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:28:21.347520   66801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:21.347595   66801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.347609   66801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.347621   66801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.347656   66801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.347690   66801 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 18:28:21.347691   66801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.347758   66801 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.347695   66801 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349312   66801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.349387   66801 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 18:28:21.349402   66801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.349526   66801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.349574   66801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.349582   66801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.349632   66801 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349311   66801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.515246   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.515760   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.543817   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 18:28:21.551755   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.562433   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.594208   66801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 18:28:21.594257   66801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.594291   66801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 18:28:21.594317   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.594323   66801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.594364   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.666046   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.666654   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.757831   66801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 18:28:21.757867   66801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.757867   66801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 18:28:21.757894   66801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.757914   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757926   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.757937   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757982   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.758142   66801 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 18:28:21.758161   66801 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 18:28:21.758197   66801 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.758169   66801 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.758234   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.758270   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.813746   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.813792   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.813836   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.813837   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.813840   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.813890   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.934434   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.958229   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.958287   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.958377   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.958381   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.958467   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.053179   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 18:28:22.053304   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.053351   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 18:28:22.053447   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:22.087756   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:22.087762   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:22.087826   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:22.087867   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.087897   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 18:28:22.087907   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087938   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087942   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 18:28:22.161136   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 18:28:22.161259   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:22.201924   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 18:28:22.201967   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 18:28:22.202032   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:22.202068   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:21.207941   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:28:21.209066   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.209518   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.209604   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.209495   68155 retry.go:31] will retry after 258.02952ms: waiting for machine to come up
	I1028 18:28:21.468599   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.469034   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.469052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.468996   68155 retry.go:31] will retry after 389.053264ms: waiting for machine to come up
	I1028 18:28:21.859493   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.859987   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.860017   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.859929   68155 retry.go:31] will retry after 454.438888ms: waiting for machine to come up
	I1028 18:28:22.315484   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.315961   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.315988   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.315904   68155 retry.go:31] will retry after 531.549561ms: waiting for machine to come up
	I1028 18:28:22.849247   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.849736   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.849791   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.849693   68155 retry.go:31] will retry after 602.202649ms: waiting for machine to come up
	I1028 18:28:23.453311   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:23.453859   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:23.453887   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:23.453796   68155 retry.go:31] will retry after 836.622626ms: waiting for machine to come up
	I1028 18:28:24.291959   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:24.292286   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:24.292315   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:24.292252   68155 retry.go:31] will retry after 1.187276744s: waiting for machine to come up
	I1028 18:28:25.480962   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:25.481398   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:25.481417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:25.481350   68155 retry.go:31] will retry after 1.417127806s: waiting for machine to come up
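
The interleaved 67149 lines above come from a second profile (old-k8s-version-223868) whose VM is still booting: each attempt to read the domain's DHCP lease fails, and retry.go schedules another try after a progressively longer delay. A generic sketch of such a retry-with-growing-delay loop is below; the growth factor and jitter are illustrative assumptions, not the values retry.go actually uses.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping
    // a little longer (plus jitter) between tries, similar in spirit to the
    // "will retry after ..." lines in the log.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // assumed growth factor
        }
        return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
    }

    func main() {
        err := retryWithBackoff(5, 250*time.Millisecond, func() error {
            return errors.New("machine has no IP yet") // stand-in for the DHCP lease lookup
        })
        fmt.Println(err)
    }
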
	I1028 18:28:23.586400   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.127903   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3: (2.040063682s)
	I1028 18:28:24.127962   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 18:28:24.127967   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (1.966690859s)
	I1028 18:28:24.127991   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 18:28:24.128010   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.925953727s)
	I1028 18:28:24.128034   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.925947261s)
	I1028 18:28:24.128041   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 18:28:24.128048   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 18:28:24.127904   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.03994028s)
	I1028 18:28:24.128069   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:24.128085   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 18:28:24.128109   66801 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 18:28:24.128123   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.128138   66801 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.128166   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:24.128180   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.132734   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 18:28:26.097200   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.9689964s)
	I1028 18:28:26.097240   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 18:28:26.097241   66801 ssh_runner.go:235] Completed: which crictl: (1.969052863s)
	I1028 18:28:26.097264   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.097308   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:26.097309   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.900944   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:26.901481   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:26.901511   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:26.901426   68155 retry.go:31] will retry after 1.766762252s: waiting for machine to come up
	I1028 18:28:28.670334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:28.670798   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:28.670827   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:28.670742   68155 retry.go:31] will retry after 2.287152926s: waiting for machine to come up
	I1028 18:28:30.959639   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:30.959947   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:30.959963   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:30.959917   68155 retry.go:31] will retry after 1.799223833s: waiting for machine to come up
	I1028 18:28:28.165293   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.067952153s)
	I1028 18:28:28.165410   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:28.165497   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.068111312s)
	I1028 18:28:28.165523   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 18:28:28.165548   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.165591   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.208189   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:30.152411   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.986796263s)
	I1028 18:28:30.152458   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 18:28:30.152496   66801 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152504   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.944281988s)
	I1028 18:28:30.152550   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152556   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 18:28:30.152652   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:32.761498   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:32.761941   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:32.761968   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:32.761894   68155 retry.go:31] will retry after 2.231065891s: waiting for machine to come up
	I1028 18:28:34.994438   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:34.994902   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:34.994936   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:34.994847   68155 retry.go:31] will retry after 4.079794439s: waiting for machine to come up
	I1028 18:28:33.842059   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.689484833s)
	I1028 18:28:33.842109   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 18:28:33.842138   66801 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:33.842155   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.68947822s)
	I1028 18:28:33.842184   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 18:28:33.842206   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:35.714458   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.872222439s)
	I1028 18:28:35.714493   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 18:28:35.714521   66801 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:35.714567   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:36.568124   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 18:28:36.568177   66801 cache_images.go:123] Successfully loaded all cached images
	I1028 18:28:36.568185   66801 cache_images.go:92] duration metric: took 15.220649269s to LoadCachedImages
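
The LoadCachedImages phase above follows the same pattern for every image: podman image inspect to see whether the runtime already has it at the expected digest, crictl rmi to drop a mismatched copy, a stat on the cached tarball under /var/lib/minikube/images, and finally podman load -i to import it (about 15.2s in total here). Below is a condensed sketch of that check-then-load flow using the same command names; running locally and without sudo is an assumption, as the real run executes these over SSH.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // loadIfMissing imports a cached image tarball only when the image is not
    // already known to the container runtime, mirroring the inspect -> rmi ->
    // podman load sequence in the log.
    func loadIfMissing(image, tarball string) error {
        out, err := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if err == nil && strings.TrimSpace(string(out)) != "" {
            return nil // image already present, nothing to load
        }
        // Drop any stale copy the runtime may still reference.
        _ = exec.Command("crictl", "rmi", image).Run()
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("cached tarball missing: %w", err)
        }
        return exec.Command("podman", "load", "-i", tarball).Run()
    }

    func main() {
        err := loadIfMissing("registry.k8s.io/kube-scheduler:v1.31.2",
            "/var/lib/minikube/images/kube-scheduler_v1.31.2")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
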
	I1028 18:28:36.568199   66801 kubeadm.go:934] updating node { 192.168.61.78 8443 v1.31.2 crio true true} ...
	I1028 18:28:36.568310   66801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-051152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:36.568383   66801 ssh_runner.go:195] Run: crio config
	I1028 18:28:36.613400   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:36.613425   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:36.613435   66801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:36.613454   66801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.78 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-051152 NodeName:no-preload-051152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:28:36.613596   66801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-051152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
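
The generated kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch, the snippet below walks such a stream with the gopkg.in/yaml.v3 module and prints each document's kind; the path is the one the log shows, and the module dependency is an assumption about how one might inspect the file, not part of minikube.

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break // end of the multi-document stream
            } else if err != nil {
                fmt.Fprintln(os.Stderr, "decode error:", err)
                return
            }
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }
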
	
	I1028 18:28:36.613669   66801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:28:36.624493   66801 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:36.624553   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:36.633828   66801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 18:28:36.649661   66801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:36.665454   66801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1028 18:28:36.681280   66801 ssh_runner.go:195] Run: grep 192.168.61.78	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:36.685010   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:36.697177   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:36.823266   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:36.840346   66801 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152 for IP: 192.168.61.78
	I1028 18:28:36.840366   66801 certs.go:194] generating shared ca certs ...
	I1028 18:28:36.840380   66801 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:36.840538   66801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:36.840578   66801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:36.840586   66801 certs.go:256] generating profile certs ...
	I1028 18:28:36.840661   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.key
	I1028 18:28:36.840722   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key.262d982c
	I1028 18:28:36.840758   66801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key
	I1028 18:28:36.840859   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:36.840892   66801 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:36.840902   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:36.840922   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:36.840943   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:36.840971   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:36.841025   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:36.841818   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:36.881548   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:36.907084   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:36.947810   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:36.976268   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 18:28:37.003795   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:28:37.036252   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:37.059731   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:28:37.083467   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:37.106397   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:37.128719   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:37.151133   66801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:37.166917   66801 ssh_runner.go:195] Run: openssl version
	I1028 18:28:37.172387   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:37.182117   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186329   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186389   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.191925   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:37.201799   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:37.211620   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215889   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215923   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.221588   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:37.231983   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:37.242291   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246869   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246904   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.252408   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:37.262946   66801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:37.267334   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:37.273164   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:37.278831   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:37.284778   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:37.290547   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:37.296195   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
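
Each openssl x509 ... -checkend 86400 call above asks whether the certificate is still valid for at least another 24 hours; only a certificate close to expiry would need regenerating. The same condition can be checked natively with crypto/x509, as in this sketch (the path is one of the files checked above; the rest is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the condition "openssl x509 -checkend" tests (86400s = 24h in the log).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("certificate expires within 24h; it would need to be regenerated")
        }
    }
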
	I1028 18:28:37.301915   66801 kubeadm.go:392] StartCluster: {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:37.301986   66801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:37.302037   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.345115   66801 cri.go:89] found id: ""
	I1028 18:28:37.345185   66801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:37.355312   66801 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:37.355328   66801 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:37.355370   66801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:37.364777   66801 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:37.366056   66801 kubeconfig.go:125] found "no-preload-051152" server: "https://192.168.61.78:8443"
	I1028 18:28:37.368829   66801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:37.378010   66801 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.78
	I1028 18:28:37.378039   66801 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:37.378047   66801 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:37.378083   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.413442   66801 cri.go:89] found id: ""
	I1028 18:28:37.413522   66801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:37.428998   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:37.438365   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:37.438391   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:37.438442   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:37.447260   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:37.447310   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:37.456615   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:37.465292   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:37.465351   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:37.474382   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.482957   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:37.483012   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.491991   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:37.500635   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:37.500709   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
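
The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; in this run none of the files exist yet, so every grep fails and the rm -f calls are no-ops. A compact sketch of that keep-only-matching-configs idea (paths and endpoint taken from the log; deleting these files requires root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleConfigs removes any of the given kubeconfig files that do not
    // mention the expected API endpoint, matching the grep-then-rm pattern above.
    func pruneStaleConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                continue // missing file: nothing to prune
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("removing stale config %s\n", p)
                _ = os.Remove(p)
            }
        }
    }

    func main() {
        pruneStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
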
	I1028 18:28:37.509632   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:37.518808   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:37.642796   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:40.421350   67489 start.go:364] duration metric: took 3m5.178550845s to acquireMachinesLock for "default-k8s-diff-port-692033"
	I1028 18:28:40.421416   67489 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:40.421430   67489 fix.go:54] fixHost starting: 
	I1028 18:28:40.421843   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:40.421894   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:40.439583   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I1028 18:28:40.440133   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:40.440679   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:28:40.440701   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:40.441025   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:40.441198   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:40.441359   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:28:40.443029   67489 fix.go:112] recreateIfNeeded on default-k8s-diff-port-692033: state=Stopped err=<nil>
	I1028 18:28:40.443055   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	W1028 18:28:40.443202   67489 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:40.445489   67489 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-692033" ...
	I1028 18:28:39.079052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079556   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079584   67149 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:28:39.079593   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:28:39.079888   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.079919   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | skip adding static IP to network mk-old-k8s-version-223868 - found existing host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"}
	I1028 18:28:39.079935   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:28:39.079955   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:28:39.079971   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:28:39.082041   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.082354   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082480   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:28:39.082500   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:28:39.082528   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:39.082555   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:28:39.082567   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:28:39.204523   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:39.204883   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:28:39.205526   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.208073   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208434   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.208478   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208709   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:28:39.208907   67149 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:39.208926   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:39.209144   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.211109   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211407   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.211437   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.211739   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.211888   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.212033   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.212218   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.212388   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.212398   67149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:39.316528   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:39.316566   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.316813   67149 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:28:39.316841   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.317028   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.319389   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319687   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.319713   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319836   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.320017   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320167   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320310   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.320458   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.320642   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.320656   67149 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:28:39.439149   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:28:39.439179   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.441957   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442268   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.442300   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442528   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.442736   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.442940   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.443122   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.443304   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.443525   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.443550   67149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:39.561619   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:39.561651   67149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:39.561702   67149 buildroot.go:174] setting up certificates
	I1028 18:28:39.561716   67149 provision.go:84] configureAuth start
	I1028 18:28:39.561731   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.562015   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.564838   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565195   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.565229   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565373   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.567875   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568262   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.568287   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568452   67149 provision.go:143] copyHostCerts
	I1028 18:28:39.568534   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:39.568553   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:39.568621   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:39.568745   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:39.568768   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:39.568810   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:39.568899   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:39.568911   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:39.568937   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:39.569006   67149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
	I1028 18:28:39.786398   67149 provision.go:177] copyRemoteCerts
	I1028 18:28:39.786449   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:39.786482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.789048   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789373   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.789417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789535   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.789733   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.789884   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.790013   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:39.871816   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:39.902889   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:28:39.932633   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:39.958581   67149 provision.go:87] duration metric: took 396.851161ms to configureAuth
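The copyRemoteCerts step above pushes the CA plus the freshly generated server certificate and key to the guest paths listed in the auth options (CaCertRemotePath, ServerCertRemotePath, ServerKeyRemotePath). A minimal spot-check over SSH, assuming the same /etc/docker paths as in the log and that openssl is present on the Buildroot guest, would look like:

    # hedged sketch: confirm the certs minikube copied are present on the guest
    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    # optionally confirm the server cert carries the SANs generated above (127.0.0.1, 192.168.83.194, localhost, minikube, old-k8s-version-223868)
    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'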
	I1028 18:28:39.958609   67149 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:39.958796   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:28:39.958881   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.961667   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962019   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.962044   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962240   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.962468   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962671   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962850   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.963037   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.963220   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.963239   67149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:40.179808   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:40.179843   67149 machine.go:96] duration metric: took 970.91659ms to provisionDockerMachine
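The SSH command just above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so that the 10.96.0.0/12 service CIDR is treated as an insecure registry range. A quick way to confirm the option took effect, assuming the same file path as in the log:

    cat /etc/sysconfig/crio.minikube    # should contain --insecure-registry 10.96.0.0/12
    sudo systemctl is-active crio       # crio should be active again after the restart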
	I1028 18:28:40.179857   67149 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:28:40.179869   67149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:40.179917   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.180287   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:40.180319   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.183011   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183383   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.183411   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183578   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.183770   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.183964   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.184114   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.270445   67149 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:40.275798   67149 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:40.275825   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:40.275898   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:40.275995   67149 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:40.276108   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:40.287529   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:40.310860   67149 start.go:296] duration metric: took 130.989944ms for postStartSetup
	I1028 18:28:40.310899   67149 fix.go:56] duration metric: took 20.417730265s for fixHost
	I1028 18:28:40.310925   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.313613   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.313967   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.314000   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.314175   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.314354   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314518   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314692   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.314862   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:40.315021   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:40.315032   67149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:40.421204   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140120.384024791
	
	I1028 18:28:40.421225   67149 fix.go:216] guest clock: 1730140120.384024791
	I1028 18:28:40.421235   67149 fix.go:229] Guest: 2024-10-28 18:28:40.384024791 +0000 UTC Remote: 2024-10-28 18:28:40.310903937 +0000 UTC m=+244.300202669 (delta=73.120854ms)
	I1028 18:28:40.421259   67149 fix.go:200] guest clock delta is within tolerance: 73.120854ms
	I1028 18:28:40.421265   67149 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 20.528130845s
	I1028 18:28:40.421297   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.421574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:40.424700   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425088   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.425116   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425286   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.425971   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426188   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426266   67149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:40.426340   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.426604   67149 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:40.426632   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.429408   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429569   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429807   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.429841   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429950   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430059   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.430092   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.430123   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430236   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430383   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430459   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430616   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.430614   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.509203   67149 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:40.540019   67149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:40.701732   67149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:40.710264   67149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:40.710354   67149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:40.731373   67149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:40.731398   67149 start.go:495] detecting cgroup driver to use...
	I1028 18:28:40.731465   67149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:40.751312   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:40.766288   67149 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:40.766399   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:40.783995   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:40.800295   67149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:40.940688   67149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:41.101493   67149 docker.go:233] disabling docker service ...
	I1028 18:28:41.101562   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:41.123350   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:41.141744   67149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:41.279020   67149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:41.414748   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:41.429469   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:41.448611   67149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:28:41.448669   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.460766   67149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:41.460842   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.473021   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.485888   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.497498   67149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:41.509250   67149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:41.519701   67149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:41.519754   67149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:41.534596   67149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:41.544814   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:41.681203   67149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:41.786879   67149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:41.786957   67149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:41.791981   67149 start.go:563] Will wait 60s for crictl version
	I1028 18:28:41.792041   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:41.796034   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:41.839867   67149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:41.839958   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.873029   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.904534   67149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
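The sed edits above adjust CRI-O through the drop-in /etc/crio/crio.conf.d/02-crio.conf rather than the main config: the pause image is pinned to registry.k8s.io/pause:3.2 to match the v1.20 control plane being prepared, cgroup_manager is switched to cgroupfs, and conmon_cgroup is reset to "pod". After the edits the relevant keys in the drop-in should read roughly as follows (a sketch assembled from the sed expressions, not a dump of the actual file):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"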
	I1028 18:28:38.508232   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.720400   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.784720   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.892007   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:38.892083   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.392953   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.892228   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.912702   66801 api_server.go:72] duration metric: took 1.020696043s to wait for apiserver process to appear ...
	I1028 18:28:39.912728   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:28:39.912749   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:39.913221   66801 api_server.go:269] stopped: https://192.168.61.78:8443/healthz: Get "https://192.168.61.78:8443/healthz": dial tcp 192.168.61.78:8443: connect: connection refused
	I1028 18:28:40.413025   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:40.446984   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Start
	I1028 18:28:40.447191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring networks are active...
	I1028 18:28:40.447998   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network default is active
	I1028 18:28:40.448350   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network mk-default-k8s-diff-port-692033 is active
	I1028 18:28:40.448884   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Getting domain xml...
	I1028 18:28:40.449664   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Creating domain...
	I1028 18:28:41.740010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting to get IP...
	I1028 18:28:41.740827   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741273   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:41.741192   68341 retry.go:31] will retry after 276.06097ms: waiting for machine to come up
	I1028 18:28:42.018700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019135   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019159   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.019089   68341 retry.go:31] will retry after 318.252876ms: waiting for machine to come up
	I1028 18:28:42.338630   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339287   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339312   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.339205   68341 retry.go:31] will retry after 428.196122ms: waiting for machine to come up
	I1028 18:28:42.768656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769225   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769248   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.769134   68341 retry.go:31] will retry after 483.256928ms: waiting for machine to come up
	I1028 18:28:43.253739   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254304   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254353   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.254220   68341 retry.go:31] will retry after 577.932805ms: waiting for machine to come up
	I1028 18:28:43.834355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.834976   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.835021   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.834945   68341 retry.go:31] will retry after 639.531065ms: waiting for machine to come up
	I1028 18:28:44.475727   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476299   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476331   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:44.476248   68341 retry.go:31] will retry after 1.171398436s: waiting for machine to come up
	I1028 18:28:43.473059   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.473096   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.473113   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.588338   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.588371   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.913612   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.918557   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:43.918598   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.412902   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.425930   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.425971   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.913482   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.926092   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.926126   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:45.413673   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:45.419384   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:28:45.430384   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:28:45.430431   66801 api_server.go:131] duration metric: took 5.517694037s to wait for apiserver health ...
	I1028 18:28:45.430442   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:45.430450   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:45.432587   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
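The 403 and 500 responses in the polling loop above are the expected progression while the restarted apiserver finishes its post-start hooks: anonymous requests are rejected with 403 until the RBAC bootstrap roles exist, then /healthz reports 500 while individual hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, and finally 200 once everything is up, after which minikube falls back to the bridge CNI because no explicit CNI was configured for the kvm2+crio combination. The same probe can be reproduced by hand against the endpoint shown in the log:

    # hedged sketch of the probe the log is performing
    curl -ksS https://192.168.61.78:8443/healthz ; echo   # a body of "ok" matches the final check above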
	I1028 18:28:41.906005   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:41.909278   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909683   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:41.909741   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909996   67149 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:41.915405   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:41.931747   67149 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:41.931886   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:28:41.931944   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:41.987909   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:41.987966   67149 ssh_runner.go:195] Run: which lz4
	I1028 18:28:41.993527   67149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:28:41.998982   67149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:28:41.999014   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:28:43.643480   67149 crio.go:462] duration metric: took 1.649982959s to copy over tarball
	I1028 18:28:43.643559   67149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
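Because no preloaded images were found for v1.20.0 on the guest (the crictl images check above came back empty), the cached preload tarball (~473 MB) is copied over SSH and unpacked into /var. The transfer and extraction reduce to roughly the following, assuming the same paths as in the log:

    # hedged sketch of the preload path taken above
    stat -c "%s %y" /preloaded.tar.lz4 || true    # absent on the guest, so the tarball is copied over
    # (scp of preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4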
	I1028 18:28:45.433946   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:28:45.453114   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:28:45.479255   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:28:45.497020   66801 system_pods.go:59] 8 kube-system pods found
	I1028 18:28:45.497072   66801 system_pods.go:61] "coredns-7c65d6cfc9-74b6t" [b6a550da-7c40-4283-b49e-1ab29e652037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:28:45.497084   66801 system_pods.go:61] "etcd-no-preload-051152" [d5b31ded-95ce-4dde-ba88-e653dfdb8d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:28:45.497097   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [95d0acb0-4d58-4307-9f4f-10f920ff4745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:28:45.497105   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [722530e1-1d76-40dc-8a24-fe79d0167835] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:28:45.497112   66801 system_pods.go:61] "kube-proxy-kg42f" [7891354b-a501-45c4-b15c-cf6d29e3721f] Running
	I1028 18:28:45.497121   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [c658808c-79c2-4b8e-b72c-0b2d8e058ab4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:28:45.497130   66801 system_pods.go:61] "metrics-server-6867b74b74-vgd8k" [626b71a2-6904-409f-9274-6963a94e6ac2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:28:45.497137   66801 system_pods.go:61] "storage-provisioner" [39bf84c9-9c6f-4048-8a11-460fb12f622b] Running
	I1028 18:28:45.497146   66801 system_pods.go:74] duration metric: took 17.863894ms to wait for pod list to return data ...
	I1028 18:28:45.497160   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:28:45.501945   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:28:45.501977   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:28:45.501993   66801 node_conditions.go:105] duration metric: took 4.827279ms to run NodePressure ...
	I1028 18:28:45.502014   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:45.835429   66801 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840823   66801 kubeadm.go:739] kubelet initialised
	I1028 18:28:45.840852   66801 kubeadm.go:740] duration metric: took 5.391212ms waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840862   66801 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:28:45.846565   66801 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:45.648994   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649559   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649587   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:45.649512   68341 retry.go:31] will retry after 1.258585317s: waiting for machine to come up
	I1028 18:28:46.909541   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909955   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:46.909911   68341 retry.go:31] will retry after 1.827150306s: waiting for machine to come up
	I1028 18:28:48.738193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738696   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738725   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:48.738653   68341 retry.go:31] will retry after 1.738249889s: waiting for machine to come up
	I1028 18:28:46.758767   67149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115173801s)
	I1028 18:28:46.758810   67149 crio.go:469] duration metric: took 3.115300284s to extract the tarball
	I1028 18:28:46.758821   67149 ssh_runner.go:146] rm: /preloaded.tar.lz4
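	Condensed, the preload step above copies the cached image tarball onto the node as /preloaded.tar.lz4, unpacks it into /var with lz4, and removes it. A rough equivalent of the commands run over SSH (the copy itself goes through minikube's ssh_runner; the scp form and <node> placeholder below are illustrative only):
	scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 <node>:/preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm -f /preloaded.tar.lz4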
	I1028 18:28:46.816906   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:46.864347   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:46.864376   67149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:46.864499   67149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.864564   67149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.864623   67149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.864639   67149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.864674   67149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.864686   67149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:28:46.864710   67149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.864529   67149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:46.866383   67149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.866445   67149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.866493   67149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.866579   67149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.866795   67149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.867073   67149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.867095   67149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:28:46.867488   67149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.043358   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.053844   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.055684   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.056812   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.066211   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.090931   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.104900   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:28:47.141214   67149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:28:47.141260   67149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.141307   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202804   67149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:28:47.202863   67149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.202873   67149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:28:47.202903   67149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.202915   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202944   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.234811   67149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:28:47.234853   67149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.234900   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.236717   67149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:28:47.236751   67149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.236798   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.243872   67149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:28:47.243918   67149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.243971   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260210   67149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:28:47.260253   67149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:28:47.260256   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.260293   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260398   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.260438   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.260456   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.260517   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.260559   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413617   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.413776   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.413804   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413825   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.414063   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.414103   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.414150   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.544933   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.581577   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.582079   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.582161   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.582206   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.582344   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.582819   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.662237   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:28:47.736212   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.739757   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:28:47.739928   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:28:47.739802   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:28:47.739812   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:28:47.739841   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:28:47.783578   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:28:49.121698   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:49.266583   67149 cache_images.go:92] duration metric: took 2.402188013s to LoadCachedImages
	W1028 18:28:49.266686   67149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 18:28:49.266702   67149 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:28:49.266828   67149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:49.266918   67149 ssh_runner.go:195] Run: crio config
	I1028 18:28:49.318146   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:28:49.318167   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:49.318176   67149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:49.318193   67149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:28:49.318310   67149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:49.318371   67149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:28:49.329249   67149 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:49.329339   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:49.339379   67149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:28:49.359216   67149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:49.378289   67149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 18:28:49.397766   67149 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:49.401788   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
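	The one-liner above refreshes the control-plane host entry: any stale line for control-plane.minikube.internal is dropped and the node's current IP is appended. Expanded for readability (a sketch only; the temp file name below differs from the $$-based one used in the run):
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	echo $'192.168.83.194\tcontrol-plane.minikube.internal' >> /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts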
	I1028 18:28:49.418204   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:49.558031   67149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:49.575443   67149 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:28:49.575469   67149 certs.go:194] generating shared ca certs ...
	I1028 18:28:49.575489   67149 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:49.575693   67149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:49.575746   67149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:49.575756   67149 certs.go:256] generating profile certs ...
	I1028 18:28:49.575859   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:28:49.575914   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:28:49.575951   67149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:28:49.576058   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:49.576092   67149 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:49.576103   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:49.576131   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:49.576162   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:49.576186   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:49.576238   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:49.576994   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:49.622814   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:49.653690   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:49.678975   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:49.707340   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:28:49.744836   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:28:49.776367   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:49.818999   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:28:49.847531   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:49.871924   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:49.897751   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:49.923267   67149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:49.939805   67149 ssh_runner.go:195] Run: openssl version
	I1028 18:28:49.945611   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:49.956191   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960862   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960916   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.966701   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:49.977882   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:49.990873   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995751   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995810   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:50.001891   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:50.013508   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:50.028132   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034144   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034217   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.041768   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
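	The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for CA directories: each link is named after the hash printed by the preceding openssl x509 -hash call. A minimal sketch of the same pattern:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"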
	I1028 18:28:50.054079   67149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:50.058983   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:50.064802   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:50.070790   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:50.077090   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:50.083149   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:50.089232   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
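	The -checkend 86400 runs above confirm that each control-plane certificate is still valid 24 hours (86,400 seconds) from now; openssl exits non-zero if the certificate would expire within that window. A sketch of how such a check is typically consumed (illustrative, not minikube's actual code):
	if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate expires within 24h; needs regeneration" >&2
	fi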
	I1028 18:28:50.095205   67149 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:50.095338   67149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:50.095411   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.139777   67149 cri.go:89] found id: ""
	I1028 18:28:50.139854   67149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:50.151967   67149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:50.151986   67149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:50.152040   67149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:50.163454   67149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:50.164876   67149 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:28:50.165798   67149 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-223868" cluster setting kubeconfig missing "old-k8s-version-223868" context setting]
	I1028 18:28:50.167121   67149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:50.169545   67149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:50.179447   67149 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.194
	I1028 18:28:50.179477   67149 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:50.179490   67149 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:50.179542   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.213891   67149 cri.go:89] found id: ""
	I1028 18:28:50.213963   67149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:50.231491   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:50.241752   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:50.241775   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:50.241829   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:50.252015   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:50.252075   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:50.263032   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:50.273500   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:50.273564   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:50.283603   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.293521   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:50.293567   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.303701   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:50.316202   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:50.316269   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:28:50.327841   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:50.341366   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:50.469586   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:49.414188   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:51.855115   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:50.478658   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479208   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479237   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:50.479151   68341 retry.go:31] will retry after 2.362711935s: waiting for machine to come up
	I1028 18:28:52.842907   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843290   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843314   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:52.843250   68341 retry.go:31] will retry after 2.561710525s: waiting for machine to come up
	I1028 18:28:51.507608   67149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037983659s)
	I1028 18:28:51.507645   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.733141   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.842228   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
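	Taken together with the certs and kubeconfig phases at 18:28:50 above, the restart path re-runs individual kubeadm init phases against the generated config; condensed here for readability (the sudo env PATH=... prefix from the log lines is omitted):
	kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml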
	I1028 18:28:51.947336   67149 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:51.947430   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.447618   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.947814   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.448476   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.947571   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.448371   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.947700   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.447735   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.948435   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
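	The repeated pgrep runs above poll for a kube-apiserver process roughly every 500ms until it appears; an equivalent wait loop would look like this (illustrative only, not minikube's actual code):
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done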
	I1028 18:28:53.857886   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:54.862972   66801 pod_ready.go:93] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:54.863005   66801 pod_ready.go:82] duration metric: took 9.016413449s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:54.863019   66801 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869043   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:55.869076   66801 pod_ready.go:82] duration metric: took 1.006049217s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869091   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874842   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.874865   66801 pod_ready.go:82] duration metric: took 2.005766936s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874875   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878913   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.878930   66801 pod_ready.go:82] duration metric: took 4.049698ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878937   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889897   66801 pod_ready.go:93] pod "kube-proxy-kg42f" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.889913   66801 pod_ready.go:82] duration metric: took 10.971269ms for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889921   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.407934   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408336   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408362   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:55.408274   68341 retry.go:31] will retry after 3.762790995s: waiting for machine to come up
	I1028 18:28:59.173489   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173900   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Found IP for machine: 192.168.39.215
	I1028 18:28:59.173923   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has current primary IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173929   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserving static IP address...
	I1028 18:28:59.174320   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.174343   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | skip adding static IP to network mk-default-k8s-diff-port-692033 - found existing host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"}
	I1028 18:28:59.174355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserved static IP address: 192.168.39.215
	I1028 18:28:59.174365   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for SSH to be available...
	I1028 18:28:59.174376   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Getting to WaitForSSH function...
	I1028 18:28:59.176441   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176755   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.176786   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176913   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH client type: external
	I1028 18:28:59.176936   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa (-rw-------)
	I1028 18:28:59.176958   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:59.176970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | About to run SSH command:
	I1028 18:28:59.176982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | exit 0
	I1028 18:28:59.300272   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:59.300649   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetConfigRaw
	I1028 18:28:59.301261   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.303505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.303832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.303857   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.304080   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:28:59.304287   67489 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:59.304310   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:59.304535   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.306713   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307008   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.307042   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307187   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.307348   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307627   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.307768   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.307936   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.307946   67489 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:59.412710   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:59.412743   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413009   67489 buildroot.go:166] provisioning hostname "default-k8s-diff-port-692033"
	I1028 18:28:59.413041   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.415772   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416048   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.416070   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416251   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.416437   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416728   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.416847   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.417030   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.417041   67489 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-692033 && echo "default-k8s-diff-port-692033" | sudo tee /etc/hostname
	I1028 18:28:59.538491   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-692033
	
	I1028 18:28:59.538518   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.540842   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541144   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.541173   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.541527   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541684   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541815   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.541964   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.542123   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.542138   67489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-692033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-692033/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-692033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:59.657448   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:59.657480   67489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:59.657524   67489 buildroot.go:174] setting up certificates
	I1028 18:28:59.657539   67489 provision.go:84] configureAuth start
	I1028 18:28:59.657556   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.657832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.660465   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660797   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.660840   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660949   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.663393   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663801   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.663830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663977   67489 provision.go:143] copyHostCerts
	I1028 18:28:59.664049   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:59.664062   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:59.664117   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:59.664217   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:59.664228   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:59.664250   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:59.664300   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:59.664308   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:59.664327   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:59.664403   67489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-692033 san=[127.0.0.1 192.168.39.215 default-k8s-diff-port-692033 localhost minikube]
	I1028 18:28:59.882619   67489 provision.go:177] copyRemoteCerts
	I1028 18:28:59.882672   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:59.882695   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.885303   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.885686   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885927   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.886121   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.886278   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.886382   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:28:59.975231   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:00.000412   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 18:29:00.024424   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:29:00.048646   67489 provision.go:87] duration metric: took 391.090444ms to configureAuth
	I1028 18:29:00.048674   67489 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:00.048884   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:00.048970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.051793   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052156   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.052185   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.052532   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052729   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052894   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.053080   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.053241   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.053254   67489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:00.525285   66600 start.go:364] duration metric: took 54.917560334s to acquireMachinesLock for "embed-certs-021370"
	I1028 18:29:00.525349   66600 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:29:00.525359   66600 fix.go:54] fixHost starting: 
	I1028 18:29:00.525740   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:29:00.525778   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:29:00.544614   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I1028 18:29:00.544976   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:29:00.545433   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:29:00.545455   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:29:00.545842   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:29:00.546046   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:00.546230   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:29:00.547770   66600 fix.go:112] recreateIfNeeded on embed-certs-021370: state=Stopped err=<nil>
	I1028 18:29:00.547794   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	W1028 18:29:00.547957   66600 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:29:00.549753   66600 out.go:177] * Restarting existing kvm2 VM for "embed-certs-021370" ...
	I1028 18:28:56.447531   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.947711   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.447782   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.947642   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.948256   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.447558   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.948018   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.448186   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.947565   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.280618   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:00.280641   67489 machine.go:96] duration metric: took 976.341252ms to provisionDockerMachine
	I1028 18:29:00.280653   67489 start.go:293] postStartSetup for "default-k8s-diff-port-692033" (driver="kvm2")
	I1028 18:29:00.280669   67489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:00.280690   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.281004   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:00.281044   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.283656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.283977   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.284010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.284170   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.284382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.284549   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.284692   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.372947   67489 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:00.377456   67489 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:00.377480   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:00.377547   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:00.377646   67489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:00.377762   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:00.388767   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:00.413520   67489 start.go:296] duration metric: took 132.852709ms for postStartSetup
	I1028 18:29:00.413557   67489 fix.go:56] duration metric: took 19.992127182s for fixHost
	I1028 18:29:00.413578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.416040   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416377   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.416405   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416553   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.416756   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.416930   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.417065   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.417228   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.417412   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.417424   67489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:00.525082   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140140.492840769
	
	I1028 18:29:00.525105   67489 fix.go:216] guest clock: 1730140140.492840769
	I1028 18:29:00.525114   67489 fix.go:229] Guest: 2024-10-28 18:29:00.492840769 +0000 UTC Remote: 2024-10-28 18:29:00.413561948 +0000 UTC m=+205.301669628 (delta=79.278821ms)
	I1028 18:29:00.525169   67489 fix.go:200] guest clock delta is within tolerance: 79.278821ms
	I1028 18:29:00.525180   67489 start.go:83] releasing machines lock for "default-k8s-diff-port-692033", held for 20.103791447s
	I1028 18:29:00.525214   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.525495   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:00.528023   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528385   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.528415   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529038   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529287   67489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:00.529323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.529380   67489 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:00.529403   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.531822   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532022   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532163   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532294   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532443   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532481   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532488   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532612   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532680   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.532830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532830   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.532965   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.533103   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.609362   67489 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:00.636444   67489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:00.785916   67489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:00.792198   67489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:00.792279   67489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:00.812095   67489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:00.812124   67489 start.go:495] detecting cgroup driver to use...
	I1028 18:29:00.812190   67489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:00.829536   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:00.844021   67489 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:00.844090   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:00.858561   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:00.873128   67489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:00.990494   67489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:01.148650   67489 docker.go:233] disabling docker service ...
	I1028 18:29:01.148729   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:01.162487   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:01.177407   67489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:01.303665   67489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:01.430019   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:01.443822   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:01.462768   67489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:01.462830   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.473669   67489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:01.473737   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.484364   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.496220   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.507216   67489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:01.518848   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.534216   67489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.554294   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.565095   67489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:01.574547   67489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:01.574614   67489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:01.596531   67489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:01.606858   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:01.740272   67489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:01.844969   67489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:01.845053   67489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:01.850004   67489 start.go:563] Will wait 60s for crictl version
	I1028 18:29:01.850056   67489 ssh_runner.go:195] Run: which crictl
	I1028 18:29:01.854032   67489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:01.893281   67489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:01.893367   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.923557   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.956282   67489 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:00.551001   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Start
	I1028 18:29:00.551172   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring networks are active...
	I1028 18:29:00.551820   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network default is active
	I1028 18:29:00.552130   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network mk-embed-certs-021370 is active
	I1028 18:29:00.552482   66600 main.go:141] libmachine: (embed-certs-021370) Getting domain xml...
	I1028 18:29:00.553186   66600 main.go:141] libmachine: (embed-certs-021370) Creating domain...
	I1028 18:29:01.830016   66600 main.go:141] libmachine: (embed-certs-021370) Waiting to get IP...
	I1028 18:29:01.831046   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:01.831522   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:01.831630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:01.831518   68528 retry.go:31] will retry after 300.306268ms: waiting for machine to come up
	I1028 18:29:02.132901   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.133350   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.133383   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.133293   68528 retry.go:31] will retry after 383.232008ms: waiting for machine to come up
	I1028 18:29:02.518736   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.519274   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.519299   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.519241   68528 retry.go:31] will retry after 354.591942ms: waiting for machine to come up
	I1028 18:29:02.875813   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.876360   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.876397   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.876325   68528 retry.go:31] will retry after 529.444037ms: waiting for machine to come up
	I1028 18:28:58.895888   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:58.895918   66801 pod_ready.go:82] duration metric: took 1.005990705s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:58.895932   66801 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:00.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:02.903390   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:01.957748   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:01.960967   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:01.961382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961635   67489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:01.966300   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:01.979786   67489 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:01.979899   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:01.979957   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:02.020659   67489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:02.020716   67489 ssh_runner.go:195] Run: which lz4
	I1028 18:29:02.024772   67489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:02.030183   67489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:02.030206   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:03.449423   67489 crio.go:462] duration metric: took 1.424673911s to copy over tarball
	I1028 18:29:03.449498   67489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:01.447557   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.947946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.448522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.947533   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.447522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.948025   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.448136   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.948157   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.447635   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.947987   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.407835   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:03.408366   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:03.408390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:03.408265   68528 retry.go:31] will retry after 680.005296ms: waiting for machine to come up
	I1028 18:29:04.089802   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.090390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.090409   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.090338   68528 retry.go:31] will retry after 833.681725ms: waiting for machine to come up
	I1028 18:29:04.925788   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.926278   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.926298   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.926227   68528 retry.go:31] will retry after 1.050194845s: waiting for machine to come up
	I1028 18:29:05.978270   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:05.978715   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:05.978742   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:05.978669   68528 retry.go:31] will retry after 1.416773018s: waiting for machine to come up
	I1028 18:29:07.397367   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:07.397843   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:07.397876   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:07.397787   68528 retry.go:31] will retry after 1.621623459s: waiting for machine to come up
	I1028 18:29:04.903465   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:06.903931   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:05.622217   67489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172685001s)
	I1028 18:29:05.622253   67489 crio.go:469] duration metric: took 2.172801769s to extract the tarball
	I1028 18:29:05.622264   67489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:05.660585   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:05.705484   67489 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:05.705510   67489 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:05.705520   67489 kubeadm.go:934] updating node { 192.168.39.215 8444 v1.31.2 crio true true} ...
	I1028 18:29:05.705634   67489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-692033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:05.705725   67489 ssh_runner.go:195] Run: crio config
	I1028 18:29:05.760618   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:05.760649   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:05.760661   67489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:05.760690   67489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-692033 NodeName:default-k8s-diff-port-692033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:05.760858   67489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-692033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:05.760936   67489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:05.771392   67489 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:05.771464   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:05.780926   67489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1028 18:29:05.797951   67489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:05.814159   67489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1028 18:29:05.830723   67489 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:05.835163   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:05.847192   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:05.972201   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:05.990475   67489 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033 for IP: 192.168.39.215
	I1028 18:29:05.990492   67489 certs.go:194] generating shared ca certs ...
	I1028 18:29:05.990511   67489 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:05.990711   67489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:05.990764   67489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:05.990776   67489 certs.go:256] generating profile certs ...
	I1028 18:29:05.990875   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.key
	I1028 18:29:05.990991   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key.81b9981a
	I1028 18:29:05.991052   67489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key
	I1028 18:29:05.991218   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:05.991268   67489 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:05.991283   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:05.991317   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:05.991359   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:05.991405   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:05.991481   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:05.992294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:06.033938   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:06.070407   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:06.115934   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:06.144600   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 18:29:06.169202   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:06.196294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:06.219384   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:29:06.242169   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:06.266506   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:06.290175   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:06.313006   67489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:06.329076   67489 ssh_runner.go:195] Run: openssl version
	I1028 18:29:06.335322   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:06.346021   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350401   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350464   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.356134   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:06.366765   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:06.377486   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381920   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381978   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.387492   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:06.398392   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:06.413238   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418376   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418429   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.423997   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:06.436170   67489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:06.440853   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:06.446851   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:06.452980   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:06.458973   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:06.465088   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:06.470776   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:29:06.476462   67489 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:06.476588   67489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:06.476638   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.519820   67489 cri.go:89] found id: ""
	I1028 18:29:06.519884   67489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:06.530091   67489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:06.530110   67489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:06.530171   67489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:06.539807   67489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:06.540946   67489 kubeconfig.go:125] found "default-k8s-diff-port-692033" server: "https://192.168.39.215:8444"
	I1028 18:29:06.543088   67489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:06.552354   67489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I1028 18:29:06.552379   67489 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:06.552389   67489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:06.552445   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.586545   67489 cri.go:89] found id: ""
	I1028 18:29:06.586611   67489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:06.603418   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:06.612856   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:06.612876   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:06.612921   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:29:06.621852   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:06.621900   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:06.631132   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:29:06.640088   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:06.640158   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:06.651007   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.660034   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:06.660104   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.669587   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:29:06.678863   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:06.678937   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:06.688820   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:06.698470   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:06.820432   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.030810   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210339958s)
	I1028 18:29:08.030839   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.255000   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.321500   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.412775   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:08.412854   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.913648   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.413011   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.459009   67489 api_server.go:72] duration metric: took 1.046232596s to wait for apiserver process to appear ...
	I1028 18:29:09.459041   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:09.459062   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:09.459626   67489 api_server.go:269] stopped: https://192.168.39.215:8444/healthz: Get "https://192.168.39.215:8444/healthz": dial tcp 192.168.39.215:8444: connect: connection refused
	I1028 18:29:09.960128   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:06.447581   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.947550   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.447977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.947491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.447960   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.947662   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.448201   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.947753   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.448116   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.948175   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.020419   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:09.020867   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:09.020899   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:09.020814   68528 retry.go:31] will retry after 2.2230034s: waiting for machine to come up
	I1028 18:29:11.245136   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:11.245630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:11.245657   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:11.245595   68528 retry.go:31] will retry after 2.153898764s: waiting for machine to come up
	I1028 18:29:09.403596   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:11.903702   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:12.135346   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.135381   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.135394   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.166207   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.166234   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.459631   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.473153   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.473183   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:12.959778   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.969281   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.969320   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:13.459913   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:13.464362   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:29:13.471925   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:13.471953   67489 api_server.go:131] duration metric: took 4.012904227s to wait for apiserver health ...
	I1028 18:29:13.471964   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:13.471971   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:13.473908   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:13.475283   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:13.487393   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:13.532627   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:13.544945   67489 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:13.544982   67489 system_pods.go:61] "coredns-7c65d6cfc9-ctx9z" [7067f349-3a22-468d-bd9d-19d057eb43f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:13.544993   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [313161ff-f30f-4e25-978d-9aa2eba7fc44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:13.545004   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [e9a66e8e-946b-4365-bd63-3adfdd75e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:13.545014   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [0e682f68-2f9a-4bf3-bbe4-3a6b1ef6778d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:13.545021   67489 system_pods.go:61] "kube-proxy-86rll" [d34f46c6-3227-40c9-ac97-066b98bfce32] Running
	I1028 18:29:13.545029   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [b9058969-31e2-4249-862f-ef5de7784adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:13.545043   67489 system_pods.go:61] "metrics-server-6867b74b74-dz4nl" [833c650e-5f5d-46a1-9ae1-64619c53a92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:13.545047   67489 system_pods.go:61] "storage-provisioner" [342db8fa-7873-47b0-a5a6-52cde2e19d47] Running
	I1028 18:29:13.545053   67489 system_pods.go:74] duration metric: took 12.403166ms to wait for pod list to return data ...
	I1028 18:29:13.545060   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:13.548591   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:13.548619   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:13.548632   67489 node_conditions.go:105] duration metric: took 3.567222ms to run NodePressure ...
	I1028 18:29:13.548649   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:13.818718   67489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826139   67489 kubeadm.go:739] kubelet initialised
	I1028 18:29:13.826161   67489 kubeadm.go:740] duration metric: took 7.415257ms waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826170   67489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:13.833418   67489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.838793   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838820   67489 pod_ready.go:82] duration metric: took 5.377698ms for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.838831   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838840   67489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.843172   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843195   67489 pod_ready.go:82] duration metric: took 4.34633ms for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.843203   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843209   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.847581   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847615   67489 pod_ready.go:82] duration metric: took 4.389898ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.847630   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847642   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:11.448521   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.947592   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.448427   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.948413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.448390   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.948518   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.447929   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.948106   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.948236   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.401547   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:13.402054   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:13.402083   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:13.402028   68528 retry.go:31] will retry after 2.345507901s: waiting for machine to come up
	I1028 18:29:15.749122   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:15.749485   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:15.749502   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:15.749451   68528 retry.go:31] will retry after 2.974576274s: waiting for machine to come up
	I1028 18:29:13.903930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.403934   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:15.858338   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:18.354245   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.447535   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.948117   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.448197   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.948491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.948393   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.448406   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.947788   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.448100   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.947907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.727508   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.727990   66600 main.go:141] libmachine: (embed-certs-021370) Found IP for machine: 192.168.50.62
	I1028 18:29:18.728011   66600 main.go:141] libmachine: (embed-certs-021370) Reserving static IP address...
	I1028 18:29:18.728028   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has current primary IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.728447   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.728478   66600 main.go:141] libmachine: (embed-certs-021370) Reserved static IP address: 192.168.50.62
	I1028 18:29:18.728497   66600 main.go:141] libmachine: (embed-certs-021370) DBG | skip adding static IP to network mk-embed-certs-021370 - found existing host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"}
	I1028 18:29:18.728510   66600 main.go:141] libmachine: (embed-certs-021370) Waiting for SSH to be available...
	I1028 18:29:18.728520   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Getting to WaitForSSH function...
	I1028 18:29:18.730574   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731031   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.731069   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731227   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH client type: external
	I1028 18:29:18.731248   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa (-rw-------)
	I1028 18:29:18.731282   66600 main.go:141] libmachine: (embed-certs-021370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:29:18.731310   66600 main.go:141] libmachine: (embed-certs-021370) DBG | About to run SSH command:
	I1028 18:29:18.731327   66600 main.go:141] libmachine: (embed-certs-021370) DBG | exit 0
	I1028 18:29:18.860213   66600 main.go:141] libmachine: (embed-certs-021370) DBG | SSH cmd err, output: <nil>: 
	I1028 18:29:18.860619   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetConfigRaw
	I1028 18:29:18.861235   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:18.863576   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.863932   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.863956   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.864224   66600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/config.json ...
	I1028 18:29:18.864465   66600 machine.go:93] provisionDockerMachine start ...
	I1028 18:29:18.864521   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:18.864720   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.866951   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867314   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.867349   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867511   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.867665   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867811   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867941   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.868072   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.868230   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.868239   66600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:29:18.972695   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:29:18.972729   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.972970   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:29:18.973000   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.973209   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.975608   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.975889   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.975915   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.976082   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.976269   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976401   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976505   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.976625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.976796   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.976809   66600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-021370 && echo "embed-certs-021370" | sudo tee /etc/hostname
	I1028 18:29:19.094622   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-021370
	
	I1028 18:29:19.094655   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.097110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097436   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.097460   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097639   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.097817   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.097967   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.098121   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.098309   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.098517   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.098533   66600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-021370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-021370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-021370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:29:19.218088   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:29:19.218112   66600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:29:19.218140   66600 buildroot.go:174] setting up certificates
	I1028 18:29:19.218150   66600 provision.go:84] configureAuth start
	I1028 18:29:19.218159   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:19.218411   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:19.221093   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221441   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.221469   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221641   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.223628   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.223908   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.223928   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.224085   66600 provision.go:143] copyHostCerts
	I1028 18:29:19.224155   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:29:19.224185   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:29:19.224252   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:29:19.224380   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:29:19.224390   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:29:19.224422   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:29:19.224532   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:29:19.224542   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:29:19.224570   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:29:19.224655   66600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-021370 san=[127.0.0.1 192.168.50.62 embed-certs-021370 localhost minikube]
	I1028 18:29:19.402860   66600 provision.go:177] copyRemoteCerts
	I1028 18:29:19.402925   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:29:19.402954   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.405556   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.405904   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.405939   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.406100   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.406265   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.406391   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.406494   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.486543   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:19.510790   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:29:19.534037   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:29:19.557509   66600 provision.go:87] duration metric: took 339.349044ms to configureAuth
	I1028 18:29:19.557531   66600 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:19.557681   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:19.557745   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.560240   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560594   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.560623   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560757   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.560931   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561110   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561320   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.561490   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.561651   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.561664   66600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:19.781270   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:19.781304   66600 machine.go:96] duration metric: took 916.814114ms to provisionDockerMachine
	I1028 18:29:19.781317   66600 start.go:293] postStartSetup for "embed-certs-021370" (driver="kvm2")
	I1028 18:29:19.781327   66600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:19.781345   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:19.781664   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:19.781690   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.784176   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784509   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.784538   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784667   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.784854   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.785028   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.785171   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.867396   66600 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:19.871516   66600 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:19.871542   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:19.871630   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:19.871717   66600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:19.871799   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:19.882017   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:19.906531   66600 start.go:296] duration metric: took 125.203636ms for postStartSetup
	I1028 18:29:19.906562   66600 fix.go:56] duration metric: took 19.381205641s for fixHost
	I1028 18:29:19.906581   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.909285   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909610   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.909640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909778   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.909980   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910311   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910444   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.910625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.910788   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.910803   66600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:20.017311   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140159.989127147
	
	I1028 18:29:20.017339   66600 fix.go:216] guest clock: 1730140159.989127147
	I1028 18:29:20.017346   66600 fix.go:229] Guest: 2024-10-28 18:29:19.989127147 +0000 UTC Remote: 2024-10-28 18:29:19.906566181 +0000 UTC m=+356.890524496 (delta=82.560966ms)
	I1028 18:29:20.017368   66600 fix.go:200] guest clock delta is within tolerance: 82.560966ms
	I1028 18:29:20.017374   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 19.492049852s
	I1028 18:29:20.017396   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.017657   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:20.020286   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020680   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.020704   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020816   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021307   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021491   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021577   66600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:20.021616   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.021746   66600 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:20.021767   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.024157   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024429   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024511   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024533   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024679   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.024856   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.024880   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024896   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.025019   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025070   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.025160   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.025201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.025304   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025443   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.101316   66600 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:20.124859   66600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:20.268773   66600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:20.275277   66600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:20.275358   66600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:20.291972   66600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:20.291999   66600 start.go:495] detecting cgroup driver to use...
	I1028 18:29:20.292066   66600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:20.311389   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:20.325385   66600 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:20.325434   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:20.339246   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:20.353759   66600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:20.477639   66600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:20.622752   66600 docker.go:233] disabling docker service ...
	I1028 18:29:20.622825   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:20.637258   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:20.650210   66600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:20.801036   66600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:20.945078   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:20.959494   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:20.977797   66600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:20.977854   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.987991   66600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:20.988038   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.998188   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.008502   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.018540   66600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:21.028663   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.038758   66600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.056298   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.067136   66600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:21.076859   66600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:21.076906   66600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:21.090468   66600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:21.099951   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:21.226675   66600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:21.321993   66600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:21.322074   66600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:21.327981   66600 start.go:563] Will wait 60s for crictl version
	I1028 18:29:21.328028   66600 ssh_runner.go:195] Run: which crictl
	I1028 18:29:21.331673   66600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:21.369066   66600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:21.369168   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.396873   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.426233   66600 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:21.427570   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:21.430207   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430560   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:21.430582   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430732   66600 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:21.435293   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:21.447885   66600 kubeadm.go:883] updating cluster {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:21.447989   66600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:21.448067   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:21.488401   66600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:21.488488   66600 ssh_runner.go:195] Run: which lz4
	I1028 18:29:21.492578   66600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:21.496531   66600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:21.496560   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:22.824198   66600 crio.go:462] duration metric: took 1.331643546s to copy over tarball
	I1028 18:29:22.824276   66600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:18.902233   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.902721   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.904121   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.354850   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.355961   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:24.854445   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:21.447903   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.948305   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.448529   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.947708   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.447881   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.947572   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.448433   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.948299   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.447748   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.947863   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.906928   66600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082617931s)
	I1028 18:29:24.906959   66600 crio.go:469] duration metric: took 2.082732511s to extract the tarball
	I1028 18:29:24.906968   66600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:24.944094   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:24.991024   66600 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:24.991048   66600 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:24.991057   66600 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.31.2 crio true true} ...
	I1028 18:29:24.991175   66600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-021370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:24.991262   66600 ssh_runner.go:195] Run: crio config
	I1028 18:29:25.034609   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:25.034629   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:25.034639   66600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:25.034657   66600 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-021370 NodeName:embed-certs-021370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:25.034803   66600 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-021370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:25.034858   66600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:25.044587   66600 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:25.044661   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:25.054150   66600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 18:29:25.070100   66600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:25.085866   66600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1028 18:29:25.101932   66600 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:25.105817   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:25.117399   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:25.235698   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:25.251517   66600 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370 for IP: 192.168.50.62
	I1028 18:29:25.251536   66600 certs.go:194] generating shared ca certs ...
	I1028 18:29:25.251549   66600 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:25.251701   66600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:25.251758   66600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:25.251771   66600 certs.go:256] generating profile certs ...
	I1028 18:29:25.251871   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/client.key
	I1028 18:29:25.251951   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key.1a2ee1e7
	I1028 18:29:25.252010   66600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key
	I1028 18:29:25.252184   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:25.252213   66600 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:25.252222   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:25.252246   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:25.252271   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:25.252291   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:25.252328   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:25.252968   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:25.280370   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:25.323757   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:25.356813   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:25.395729   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 18:29:25.428768   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:25.459929   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:25.485206   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:29:25.514312   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:25.537007   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:25.559926   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:25.582419   66600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:25.599284   66600 ssh_runner.go:195] Run: openssl version
	I1028 18:29:25.605132   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:25.615576   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619856   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619911   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.625516   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:25.636185   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:25.646664   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650958   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650998   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.657176   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:25.668490   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:25.679608   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.683993   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.684041   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.689729   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:25.700817   66600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:25.705214   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:25.711351   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:25.717172   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:25.722879   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:25.728415   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:25.733859   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:29:25.739422   66600 kubeadm.go:392] StartCluster: {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:25.739492   66600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:25.739534   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.779869   66600 cri.go:89] found id: ""
	I1028 18:29:25.779926   66600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:25.790753   66600 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:25.790771   66600 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:25.790811   66600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:25.800588   66600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:25.801624   66600 kubeconfig.go:125] found "embed-certs-021370" server: "https://192.168.50.62:8443"
	I1028 18:29:25.803466   66600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:25.813212   66600 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I1028 18:29:25.813240   66600 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:25.813254   66600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:25.813312   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.848911   66600 cri.go:89] found id: ""
	I1028 18:29:25.848976   66600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:25.866165   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:25.876454   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:25.876485   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:25.876539   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:29:25.886746   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:25.886802   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:25.897486   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:29:25.907828   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:25.907881   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:25.917520   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.926896   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:25.926950   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.937184   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:29:25.946539   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:25.946585   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:25.956520   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:25.968541   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:26.077716   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.298743   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.220990469s)
	I1028 18:29:27.298777   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.517286   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.582890   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.648091   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:27.648159   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.402969   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:27.405049   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.356621   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.356642   67489 pod_ready.go:82] duration metric: took 12.508989427s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.356653   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361609   67489 pod_ready.go:93] pod "kube-proxy-86rll" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.361627   67489 pod_ready.go:82] duration metric: took 4.968039ms for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361635   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365430   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.365449   67489 pod_ready.go:82] duration metric: took 3.807327ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365460   67489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:28.373442   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.448386   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.948082   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.447496   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.948285   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.947683   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.447813   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.947810   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.448413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.947477   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.148668   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.648320   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.148392   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.648218   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.682858   66600 api_server.go:72] duration metric: took 2.034774456s to wait for apiserver process to appear ...
	I1028 18:29:29.682888   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:29.682915   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:29.683457   66600 api_server.go:269] stopped: https://192.168.50.62:8443/healthz: Get "https://192.168.50.62:8443/healthz": dial tcp 192.168.50.62:8443: connect: connection refused
	I1028 18:29:30.182997   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.878280   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.878304   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:32.878318   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.942789   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.942828   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:29.903158   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:32.404024   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.183344   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.187337   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.187362   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:33.683288   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.687653   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.687680   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:34.183190   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:34.187671   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:29:34.195909   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:34.195938   66600 api_server.go:131] duration metric: took 4.51303648s to wait for apiserver health ...
	I1028 18:29:34.195950   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:34.195959   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:34.197469   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:30.872450   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.372710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:31.448099   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.948269   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.447660   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.947559   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.447716   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.948569   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.447555   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.947612   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.448411   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.947786   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.198803   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:34.221645   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:34.250694   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:34.261167   66600 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:34.261211   66600 system_pods.go:61] "coredns-7c65d6cfc9-bdtd8" [e1fff57c-ba57-4592-9049-7cc80a6f67a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:34.261229   66600 system_pods.go:61] "etcd-embed-certs-021370" [0c805e30-b6d8-416c-97af-c33b142b46e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:34.261240   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [244e08f7-7e8c-4547-b145-9816374fe582] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:34.261251   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [c08dc68e-d441-4d96-8377-957c381c4ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:34.261265   66600 system_pods.go:61] "kube-proxy-7g7lr" [828a4297-7703-46a7-bffe-c8daf83ef4bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:29:34.261277   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [2bc3fea6-0f01-43e9-b69e-deb26980e658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:34.261286   66600 system_pods.go:61] "metrics-server-6867b74b74-gg8bl" [599d8cf3-717d-46b2-a5ba-43e00f46829b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:34.261296   66600 system_pods.go:61] "storage-provisioner" [ad047e20-2de9-447c-83bc-8b835292a25f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:29:34.261307   66600 system_pods.go:74] duration metric: took 10.589505ms to wait for pod list to return data ...
	I1028 18:29:34.261319   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:34.265041   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:34.265066   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:34.265079   66600 node_conditions.go:105] duration metric: took 3.75485ms to run NodePressure ...
	I1028 18:29:34.265098   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:34.567509   66600 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571573   66600 kubeadm.go:739] kubelet initialised
	I1028 18:29:34.571592   66600 kubeadm.go:740] duration metric: took 4.056877ms waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571599   66600 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:34.576872   66600 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:36.586357   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:34.901383   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.902526   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:35.871154   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:37.873138   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.447566   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.947886   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.448276   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.948547   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.447546   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.947974   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.448334   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.948183   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.448396   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.947620   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.083269   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.083414   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:41.083443   66600 pod_ready.go:82] duration metric: took 6.506548177s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:41.083453   66600 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:39.401480   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.402426   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:40.370529   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:42.371580   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:44.372259   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.448306   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.947486   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.448219   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.948295   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.447765   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.947468   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.448454   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.947488   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.447568   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.948070   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.089927   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.589484   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.594775   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:43.403246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.403595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.902160   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.872441   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.371650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.448123   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.948178   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.447989   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.947888   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.448230   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.947692   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.448090   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.947996   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.447949   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.947977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.089584   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.089607   66600 pod_ready.go:82] duration metric: took 7.006147079s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.089619   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093940   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.093959   66600 pod_ready.go:82] duration metric: took 4.332474ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093969   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098279   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.098295   66600 pod_ready.go:82] duration metric: took 4.319206ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098304   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102326   66600 pod_ready.go:93] pod "kube-proxy-7g7lr" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.102341   66600 pod_ready.go:82] duration metric: took 4.03162ms for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102349   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106249   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.106265   66600 pod_ready.go:82] duration metric: took 3.910208ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106279   66600 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:50.112678   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:52.113794   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.902296   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.902424   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.371741   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:53.371833   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.448130   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:51.948450   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:51.948545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:51.987428   67149 cri.go:89] found id: ""
	I1028 18:29:51.987459   67149 logs.go:282] 0 containers: []
	W1028 18:29:51.987470   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:51.987478   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:51.987534   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:52.021429   67149 cri.go:89] found id: ""
	I1028 18:29:52.021452   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.021460   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:52.021466   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:52.021509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:52.055338   67149 cri.go:89] found id: ""
	I1028 18:29:52.055362   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.055373   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:52.055380   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:52.055432   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:52.088673   67149 cri.go:89] found id: ""
	I1028 18:29:52.088697   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.088705   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:52.088711   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:52.088766   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:52.129833   67149 cri.go:89] found id: ""
	I1028 18:29:52.129854   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.129862   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:52.129867   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:52.129918   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:52.162994   67149 cri.go:89] found id: ""
	I1028 18:29:52.163029   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.163040   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:52.163047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:52.163105   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:52.196819   67149 cri.go:89] found id: ""
	I1028 18:29:52.196840   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.196848   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:52.196853   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:52.196906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:52.232924   67149 cri.go:89] found id: ""
	I1028 18:29:52.232955   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.232965   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:52.232977   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:52.232992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:52.283317   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:52.283353   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:52.296648   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:52.296673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:52.423396   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:52.423418   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:52.423429   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:52.497671   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:52.497704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:55.037920   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:55.052539   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:55.052602   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:55.089302   67149 cri.go:89] found id: ""
	I1028 18:29:55.089332   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.089343   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:55.089351   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:55.089404   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:55.127317   67149 cri.go:89] found id: ""
	I1028 18:29:55.127345   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.127352   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:55.127358   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:55.127413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:55.161689   67149 cri.go:89] found id: ""
	I1028 18:29:55.161714   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.161721   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:55.161727   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:55.161772   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:55.196494   67149 cri.go:89] found id: ""
	I1028 18:29:55.196521   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.196534   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:55.196542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:55.196596   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:55.234980   67149 cri.go:89] found id: ""
	I1028 18:29:55.235008   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.235020   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:55.235028   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:55.235086   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:55.274750   67149 cri.go:89] found id: ""
	I1028 18:29:55.274775   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.274783   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:55.274789   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:55.274842   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:55.309839   67149 cri.go:89] found id: ""
	I1028 18:29:55.309865   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.309874   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:55.309881   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:55.309943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:55.358765   67149 cri.go:89] found id: ""
	I1028 18:29:55.358793   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.358805   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:55.358816   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:55.358830   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:55.422821   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:55.422869   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:55.439458   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:55.439482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:55.507743   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:55.507764   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:55.507775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:55.582679   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:55.582710   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:54.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.612967   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:54.402722   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.902816   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:55.372539   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:57.871444   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:58.124907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:58.139125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:58.139181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:58.178829   67149 cri.go:89] found id: ""
	I1028 18:29:58.178853   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.178864   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:58.178871   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:58.178933   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:58.212290   67149 cri.go:89] found id: ""
	I1028 18:29:58.212320   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.212336   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:58.212344   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:58.212402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:58.246108   67149 cri.go:89] found id: ""
	I1028 18:29:58.246135   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.246145   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:58.246152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:58.246212   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:58.280625   67149 cri.go:89] found id: ""
	I1028 18:29:58.280651   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.280662   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:58.280670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:58.280727   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:58.318755   67149 cri.go:89] found id: ""
	I1028 18:29:58.318783   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.318793   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:58.318801   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:58.318853   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:58.356452   67149 cri.go:89] found id: ""
	I1028 18:29:58.356487   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.356499   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:58.356506   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:58.356564   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:58.389906   67149 cri.go:89] found id: ""
	I1028 18:29:58.389928   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.389936   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:58.389943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:58.390001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:58.425883   67149 cri.go:89] found id: ""
	I1028 18:29:58.425911   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.425920   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:58.425929   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:58.425943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:58.484392   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:58.484433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:58.498133   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:58.498159   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:58.572358   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:58.572382   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:58.572397   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:58.654963   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:58.654997   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:58.613408   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.614235   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:59.402355   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.403000   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.370479   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:02.370951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:04.372159   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.196593   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:01.209622   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:01.209693   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:01.243682   67149 cri.go:89] found id: ""
	I1028 18:30:01.243708   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.243718   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:01.243726   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:01.243786   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:01.277617   67149 cri.go:89] found id: ""
	I1028 18:30:01.277646   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.277654   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:01.277660   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:01.277710   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:01.314028   67149 cri.go:89] found id: ""
	I1028 18:30:01.314055   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.314067   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:01.314081   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:01.314152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:01.350324   67149 cri.go:89] found id: ""
	I1028 18:30:01.350348   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.350356   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:01.350362   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:01.350415   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:01.385802   67149 cri.go:89] found id: ""
	I1028 18:30:01.385826   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.385834   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:01.385840   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:01.385883   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:01.421507   67149 cri.go:89] found id: ""
	I1028 18:30:01.421534   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.421545   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:01.421553   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:01.421611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:01.457285   67149 cri.go:89] found id: ""
	I1028 18:30:01.457314   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.457326   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:01.457333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:01.457380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:01.490962   67149 cri.go:89] found id: ""
	I1028 18:30:01.490984   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.490992   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:01.491000   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:01.491012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:01.559906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:01.559937   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:01.559962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:01.639455   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:01.639485   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:01.681968   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:01.681994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:01.736639   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:01.736672   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.251876   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:04.265639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:04.265711   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:04.300133   67149 cri.go:89] found id: ""
	I1028 18:30:04.300159   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.300167   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:04.300173   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:04.300228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:04.335723   67149 cri.go:89] found id: ""
	I1028 18:30:04.335749   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.335760   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:04.335767   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:04.335825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:04.373009   67149 cri.go:89] found id: ""
	I1028 18:30:04.373030   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.373040   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:04.373048   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:04.373113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:04.405969   67149 cri.go:89] found id: ""
	I1028 18:30:04.405993   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.406003   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:04.406011   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:04.406066   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:04.441067   67149 cri.go:89] found id: ""
	I1028 18:30:04.441095   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.441106   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:04.441112   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:04.441176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:04.475231   67149 cri.go:89] found id: ""
	I1028 18:30:04.475260   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.475270   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:04.475277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:04.475342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:04.512970   67149 cri.go:89] found id: ""
	I1028 18:30:04.512998   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.513009   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:04.513017   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:04.513078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:04.547857   67149 cri.go:89] found id: ""
	I1028 18:30:04.547880   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.547890   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:04.547901   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:04.547913   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:04.598870   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:04.598900   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.612678   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:04.612705   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:04.686945   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:04.686967   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:04.686979   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:04.764943   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:04.764992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:03.113309   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.113449   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.613568   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:03.902735   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.903116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:06.872012   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:09.371576   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.310905   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:07.323880   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:07.323946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:07.363597   67149 cri.go:89] found id: ""
	I1028 18:30:07.363626   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.363637   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:07.363645   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:07.363706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:07.401051   67149 cri.go:89] found id: ""
	I1028 18:30:07.401073   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.401082   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:07.401089   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:07.401147   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:07.439710   67149 cri.go:89] found id: ""
	I1028 18:30:07.439735   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.439743   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:07.439748   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:07.439796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:07.476627   67149 cri.go:89] found id: ""
	I1028 18:30:07.476653   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.476663   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:07.476670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:07.476747   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:07.508770   67149 cri.go:89] found id: ""
	I1028 18:30:07.508796   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.508807   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:07.508814   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:07.508874   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:07.543467   67149 cri.go:89] found id: ""
	I1028 18:30:07.543496   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.543506   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:07.543514   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:07.543575   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:07.577181   67149 cri.go:89] found id: ""
	I1028 18:30:07.577204   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.577212   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:07.577217   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:07.577266   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:07.611862   67149 cri.go:89] found id: ""
	I1028 18:30:07.611886   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.611896   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:07.611906   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:07.611924   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:07.699794   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:07.699833   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.747920   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:07.747948   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:07.797402   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:07.797434   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:07.811752   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:07.811778   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:07.881604   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.382191   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:10.394572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:10.394624   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:10.428941   67149 cri.go:89] found id: ""
	I1028 18:30:10.428973   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.428984   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:10.429004   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:10.429071   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:10.462526   67149 cri.go:89] found id: ""
	I1028 18:30:10.462558   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.462569   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:10.462578   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:10.462641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:10.498472   67149 cri.go:89] found id: ""
	I1028 18:30:10.498495   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.498503   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:10.498509   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:10.498557   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:10.535400   67149 cri.go:89] found id: ""
	I1028 18:30:10.535422   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.535430   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:10.535436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:10.535483   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:10.568961   67149 cri.go:89] found id: ""
	I1028 18:30:10.568981   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.568988   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:10.568994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:10.569041   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:10.601273   67149 cri.go:89] found id: ""
	I1028 18:30:10.601306   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.601318   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:10.601325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:10.601383   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:10.638093   67149 cri.go:89] found id: ""
	I1028 18:30:10.638124   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.638135   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:10.638141   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:10.638203   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:10.674624   67149 cri.go:89] found id: ""
	I1028 18:30:10.674654   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.674665   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:10.674675   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:10.674688   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:10.714568   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:10.714602   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:10.764732   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:10.764765   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:10.778111   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:10.778139   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:10.854488   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.854516   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:10.854531   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:10.113469   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.614268   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:08.401958   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:10.402159   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.402379   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:11.872789   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.372947   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:13.438803   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:13.452322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:13.452397   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:13.487337   67149 cri.go:89] found id: ""
	I1028 18:30:13.487360   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.487369   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:13.487381   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:13.487488   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:13.521992   67149 cri.go:89] found id: ""
	I1028 18:30:13.522024   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.522034   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:13.522041   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:13.522099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:13.555315   67149 cri.go:89] found id: ""
	I1028 18:30:13.555347   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.555363   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:13.555371   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:13.555431   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:13.589401   67149 cri.go:89] found id: ""
	I1028 18:30:13.589425   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.589436   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:13.589445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:13.589493   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:13.629340   67149 cri.go:89] found id: ""
	I1028 18:30:13.629370   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.629385   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:13.629393   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:13.629454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:13.667307   67149 cri.go:89] found id: ""
	I1028 18:30:13.667337   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.667348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:13.667355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:13.667418   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:13.701457   67149 cri.go:89] found id: ""
	I1028 18:30:13.701513   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.701526   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:13.701536   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:13.701594   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:13.737989   67149 cri.go:89] found id: ""
	I1028 18:30:13.738023   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.738033   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:13.738043   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:13.738056   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:13.791743   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:13.791777   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:13.805501   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:13.805529   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:13.882239   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:13.882262   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:13.882276   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.963480   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:13.963516   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:15.112587   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:17.113242   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.901879   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.902869   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.871650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:18.872448   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.502799   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:16.516397   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:16.516456   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:16.551670   67149 cri.go:89] found id: ""
	I1028 18:30:16.551701   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.551712   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:16.551719   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:16.551771   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:16.584390   67149 cri.go:89] found id: ""
	I1028 18:30:16.584417   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.584428   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:16.584435   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:16.584510   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:16.620868   67149 cri.go:89] found id: ""
	I1028 18:30:16.620892   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.620899   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:16.620904   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:16.620949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:16.654189   67149 cri.go:89] found id: ""
	I1028 18:30:16.654216   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.654225   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:16.654231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:16.654284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:16.694526   67149 cri.go:89] found id: ""
	I1028 18:30:16.694557   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.694568   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:16.694575   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:16.694640   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:16.728857   67149 cri.go:89] found id: ""
	I1028 18:30:16.728884   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.728892   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:16.728898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:16.728948   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:16.763198   67149 cri.go:89] found id: ""
	I1028 18:30:16.763220   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.763227   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:16.763232   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:16.763282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:16.800120   67149 cri.go:89] found id: ""
	I1028 18:30:16.800142   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.800149   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:16.800157   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:16.800167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:16.852710   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:16.852736   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:16.867365   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:16.867395   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:16.945605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:16.945627   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:16.945643   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:17.022838   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:17.022871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.563585   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:19.577612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:19.577683   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:19.615797   67149 cri.go:89] found id: ""
	I1028 18:30:19.615820   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.615829   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:19.615836   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:19.615882   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:19.654780   67149 cri.go:89] found id: ""
	I1028 18:30:19.654802   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.654810   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:19.654816   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:19.654873   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:19.693502   67149 cri.go:89] found id: ""
	I1028 18:30:19.693532   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.693542   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:19.693550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:19.693611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:19.731869   67149 cri.go:89] found id: ""
	I1028 18:30:19.731902   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.731910   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:19.731916   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:19.731974   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:19.765046   67149 cri.go:89] found id: ""
	I1028 18:30:19.765081   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.765092   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:19.765099   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:19.765158   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:19.798082   67149 cri.go:89] found id: ""
	I1028 18:30:19.798105   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.798113   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:19.798119   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:19.798172   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:19.832562   67149 cri.go:89] found id: ""
	I1028 18:30:19.832590   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.832601   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:19.832608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:19.832676   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:19.867213   67149 cri.go:89] found id: ""
	I1028 18:30:19.867240   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.867251   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:19.867260   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:19.867277   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:19.942276   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:19.942304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.977642   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:19.977671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:20.027077   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:20.027109   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:20.040159   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:20.040181   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:20.113350   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:19.113850   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.613505   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:19.402671   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.902317   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.372438   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.872137   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:22.614379   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:22.628550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:22.628607   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:22.662647   67149 cri.go:89] found id: ""
	I1028 18:30:22.662670   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.662677   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:22.662683   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:22.662732   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:22.696697   67149 cri.go:89] found id: ""
	I1028 18:30:22.696736   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.696747   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:22.696753   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:22.696815   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:22.730011   67149 cri.go:89] found id: ""
	I1028 18:30:22.730039   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.730049   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:22.730056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:22.730114   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:22.766604   67149 cri.go:89] found id: ""
	I1028 18:30:22.766629   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.766639   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:22.766647   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:22.766703   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:22.800581   67149 cri.go:89] found id: ""
	I1028 18:30:22.800608   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.800617   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:22.800625   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:22.800692   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:22.832742   67149 cri.go:89] found id: ""
	I1028 18:30:22.832767   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.832775   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:22.832780   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:22.832823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:22.865850   67149 cri.go:89] found id: ""
	I1028 18:30:22.865876   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.865885   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:22.865892   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:22.865949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:22.904410   67149 cri.go:89] found id: ""
	I1028 18:30:22.904433   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.904443   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:22.904454   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:22.904482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:22.959275   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:22.959310   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:22.972630   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:22.972652   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:23.043851   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:23.043873   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:23.043886   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:23.121657   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:23.121686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:25.662109   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:25.676366   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:25.676443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:25.715192   67149 cri.go:89] found id: ""
	I1028 18:30:25.715216   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.715224   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:25.715230   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:25.715283   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:25.754736   67149 cri.go:89] found id: ""
	I1028 18:30:25.754765   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.754773   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:25.754779   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:25.754823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:25.794179   67149 cri.go:89] found id: ""
	I1028 18:30:25.794207   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.794216   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:25.794224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:25.794278   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:25.833206   67149 cri.go:89] found id: ""
	I1028 18:30:25.833238   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.833246   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:25.833252   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:25.833298   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:25.871628   67149 cri.go:89] found id: ""
	I1028 18:30:25.871659   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.871669   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:25.871677   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:25.871735   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:25.910900   67149 cri.go:89] found id: ""
	I1028 18:30:25.910924   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.910934   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:25.910942   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:25.911001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:25.943972   67149 cri.go:89] found id: ""
	I1028 18:30:25.943992   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.943999   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:25.944004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:25.944059   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:25.982521   67149 cri.go:89] found id: ""
	I1028 18:30:25.982544   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.982551   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:25.982559   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:25.982569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:26.033003   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:26.033031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:26.046480   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:26.046503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 18:30:24.112244   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.113815   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.902652   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.402135   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:25.873075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.372129   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	W1028 18:30:26.117194   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:26.117213   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:26.117230   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:26.195399   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:26.195430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:28.737237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:28.751846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:28.751910   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:28.794259   67149 cri.go:89] found id: ""
	I1028 18:30:28.794290   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.794301   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:28.794308   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:28.794374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:28.827573   67149 cri.go:89] found id: ""
	I1028 18:30:28.827603   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.827611   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:28.827616   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:28.827671   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:28.860676   67149 cri.go:89] found id: ""
	I1028 18:30:28.860702   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.860713   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:28.860721   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:28.860780   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:28.897302   67149 cri.go:89] found id: ""
	I1028 18:30:28.897327   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.897343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:28.897351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:28.897410   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:28.933425   67149 cri.go:89] found id: ""
	I1028 18:30:28.933454   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.933464   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:28.933471   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:28.933535   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:28.966004   67149 cri.go:89] found id: ""
	I1028 18:30:28.966032   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.966043   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:28.966051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:28.966107   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:29.002788   67149 cri.go:89] found id: ""
	I1028 18:30:29.002818   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.002829   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:29.002835   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:29.002894   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:29.033351   67149 cri.go:89] found id: ""
	I1028 18:30:29.033379   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.033389   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:29.033400   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:29.033420   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:29.107997   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:29.108025   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:29.144727   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:29.144753   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:29.206487   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:29.206521   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:29.219722   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:29.219744   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:29.288254   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:28.612485   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.113113   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.902960   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.871338   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.372081   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.789035   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:31.802587   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:31.802650   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:31.838372   67149 cri.go:89] found id: ""
	I1028 18:30:31.838401   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.838410   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:31.838416   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:31.838469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:31.877794   67149 cri.go:89] found id: ""
	I1028 18:30:31.877822   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.877833   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:31.877840   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:31.877896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:31.917442   67149 cri.go:89] found id: ""
	I1028 18:30:31.917472   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.917483   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:31.917490   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:31.917549   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:31.951900   67149 cri.go:89] found id: ""
	I1028 18:30:31.951931   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.951943   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:31.951951   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:31.952008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:31.988011   67149 cri.go:89] found id: ""
	I1028 18:30:31.988040   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.988051   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:31.988058   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:31.988116   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:32.021042   67149 cri.go:89] found id: ""
	I1028 18:30:32.021063   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.021071   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:32.021077   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:32.021124   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:32.053748   67149 cri.go:89] found id: ""
	I1028 18:30:32.053770   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.053778   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:32.053783   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:32.053837   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:32.089725   67149 cri.go:89] found id: ""
	I1028 18:30:32.089756   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.089766   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:32.089777   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:32.089790   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:32.140000   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:32.140031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:32.154023   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:32.154046   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:32.231222   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:32.231242   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:32.231255   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:32.311354   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:32.311388   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:34.852507   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:34.867133   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:34.867198   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:34.901201   67149 cri.go:89] found id: ""
	I1028 18:30:34.901228   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.901238   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:34.901245   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:34.901300   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:34.962788   67149 cri.go:89] found id: ""
	I1028 18:30:34.962814   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.962824   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:34.962835   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:34.962896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:34.996879   67149 cri.go:89] found id: ""
	I1028 18:30:34.996906   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.996917   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:34.996926   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:34.996986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:35.033516   67149 cri.go:89] found id: ""
	I1028 18:30:35.033541   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.033553   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:35.033560   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:35.033622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:35.066903   67149 cri.go:89] found id: ""
	I1028 18:30:35.066933   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.066945   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:35.066953   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:35.067010   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:35.099675   67149 cri.go:89] found id: ""
	I1028 18:30:35.099697   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.099704   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:35.099710   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:35.099755   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:35.133595   67149 cri.go:89] found id: ""
	I1028 18:30:35.133623   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.133633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:35.133641   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:35.133699   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:35.172236   67149 cri.go:89] found id: ""
	I1028 18:30:35.172262   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.172272   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:35.172282   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:35.172296   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:35.224952   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:35.224981   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:35.238554   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:35.238578   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:35.318991   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:35.319024   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:35.319040   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:35.399763   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:35.399799   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:33.612446   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.613847   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.402375   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.402653   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.902346   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:38.372413   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.947847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:37.963147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:37.963210   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.001768   67149 cri.go:89] found id: ""
	I1028 18:30:38.001792   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.001802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:38.001809   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:38.001868   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:38.042877   67149 cri.go:89] found id: ""
	I1028 18:30:38.042905   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.042916   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:38.042924   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:38.042986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:38.078116   67149 cri.go:89] found id: ""
	I1028 18:30:38.078143   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.078154   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:38.078162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:38.078226   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:38.111082   67149 cri.go:89] found id: ""
	I1028 18:30:38.111108   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.111119   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:38.111127   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:38.111187   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:38.144863   67149 cri.go:89] found id: ""
	I1028 18:30:38.144889   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.144898   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:38.144906   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:38.144962   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:38.178671   67149 cri.go:89] found id: ""
	I1028 18:30:38.178701   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.178712   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:38.178719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:38.178774   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:38.218441   67149 cri.go:89] found id: ""
	I1028 18:30:38.218464   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.218472   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:38.218477   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:38.218528   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:38.252697   67149 cri.go:89] found id: ""
	I1028 18:30:38.252719   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.252727   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:38.252736   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:38.252745   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:38.304813   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:38.304853   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:38.318437   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:38.318462   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:38.389959   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:38.389987   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:38.390002   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:38.471462   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:38.471495   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:41.013647   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:41.027167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:41.027233   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.113426   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:39.903261   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.402381   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.871193   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.873502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:41.062559   67149 cri.go:89] found id: ""
	I1028 18:30:41.062590   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.062601   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:41.062609   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:41.062667   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:41.097732   67149 cri.go:89] found id: ""
	I1028 18:30:41.097758   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.097767   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:41.097773   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:41.097819   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:41.133067   67149 cri.go:89] found id: ""
	I1028 18:30:41.133089   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.133097   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:41.133102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:41.133150   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:41.168640   67149 cri.go:89] found id: ""
	I1028 18:30:41.168674   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.168684   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:41.168691   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:41.168754   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:41.206429   67149 cri.go:89] found id: ""
	I1028 18:30:41.206453   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.206463   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:41.206470   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:41.206527   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:41.248326   67149 cri.go:89] found id: ""
	I1028 18:30:41.248350   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.248360   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:41.248369   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:41.248429   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:41.283703   67149 cri.go:89] found id: ""
	I1028 18:30:41.283734   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.283746   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:41.283753   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:41.283810   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:41.327759   67149 cri.go:89] found id: ""
	I1028 18:30:41.327786   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.327796   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:41.327807   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:41.327820   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:41.388563   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:41.388593   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:41.406411   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:41.406435   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:41.490605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:41.490626   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:41.490637   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:41.569386   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:41.569433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.109394   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:44.123047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:44.123113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:44.156762   67149 cri.go:89] found id: ""
	I1028 18:30:44.156792   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.156802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:44.156810   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:44.156867   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:44.192244   67149 cri.go:89] found id: ""
	I1028 18:30:44.192271   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.192282   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:44.192289   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:44.192357   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:44.224059   67149 cri.go:89] found id: ""
	I1028 18:30:44.224094   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.224101   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:44.224115   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:44.224168   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:44.258750   67149 cri.go:89] found id: ""
	I1028 18:30:44.258779   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.258789   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:44.258797   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:44.258854   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:44.295600   67149 cri.go:89] found id: ""
	I1028 18:30:44.295624   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.295632   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:44.295638   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:44.295684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:44.327278   67149 cri.go:89] found id: ""
	I1028 18:30:44.327302   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.327309   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:44.327315   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:44.327370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:44.360734   67149 cri.go:89] found id: ""
	I1028 18:30:44.360760   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.360768   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:44.360774   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:44.360822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:44.398198   67149 cri.go:89] found id: ""
	I1028 18:30:44.398224   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.398234   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:44.398249   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:44.398261   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:44.476135   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:44.476167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.514073   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:44.514105   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:44.563001   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:44.563033   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:44.576882   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:44.576912   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:44.648532   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:43.112043   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.113135   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.113382   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:44.403147   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:46.902890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.370854   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.371758   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.373946   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.149133   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:47.165612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:47.165696   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:47.203960   67149 cri.go:89] found id: ""
	I1028 18:30:47.203987   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.203996   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:47.204002   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:47.204065   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:47.236731   67149 cri.go:89] found id: ""
	I1028 18:30:47.236757   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.236766   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:47.236774   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:47.236828   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:47.273779   67149 cri.go:89] found id: ""
	I1028 18:30:47.273808   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.273820   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:47.273826   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:47.273878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:47.309996   67149 cri.go:89] found id: ""
	I1028 18:30:47.310020   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.310028   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:47.310034   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:47.310108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:47.352904   67149 cri.go:89] found id: ""
	I1028 18:30:47.352925   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.352934   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:47.352939   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:47.352990   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:47.389641   67149 cri.go:89] found id: ""
	I1028 18:30:47.389660   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.389667   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:47.389672   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:47.389718   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:47.422591   67149 cri.go:89] found id: ""
	I1028 18:30:47.422622   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.422632   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:47.422639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:47.422694   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:47.454849   67149 cri.go:89] found id: ""
	I1028 18:30:47.454876   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.454886   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:47.454895   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:47.454916   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:47.506176   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:47.506203   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:47.519084   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:47.519108   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:47.585660   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.585681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:47.585696   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:47.664904   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:47.664939   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:50.203775   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:50.216923   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:50.216992   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:50.252506   67149 cri.go:89] found id: ""
	I1028 18:30:50.252531   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.252541   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:50.252548   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:50.252608   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:50.288641   67149 cri.go:89] found id: ""
	I1028 18:30:50.288669   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.288678   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:50.288684   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:50.288739   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:50.322130   67149 cri.go:89] found id: ""
	I1028 18:30:50.322163   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.322174   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:50.322182   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:50.322240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:50.359508   67149 cri.go:89] found id: ""
	I1028 18:30:50.359536   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.359546   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:50.359554   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:50.359617   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:50.393571   67149 cri.go:89] found id: ""
	I1028 18:30:50.393607   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.393618   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:50.393626   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:50.393685   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:50.428683   67149 cri.go:89] found id: ""
	I1028 18:30:50.428705   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.428713   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:50.428719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:50.428767   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:50.464086   67149 cri.go:89] found id: ""
	I1028 18:30:50.464111   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.464119   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:50.464125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:50.464183   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:50.496695   67149 cri.go:89] found id: ""
	I1028 18:30:50.496726   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.496736   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:50.496745   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:50.496755   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:50.545495   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:50.545526   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:50.558819   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:50.558852   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:50.636344   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:50.636369   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:50.636384   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:50.720270   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:50.720304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:49.612927   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.613353   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.402779   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.901517   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.873490   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:54.372373   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.261194   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:53.274451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:53.274507   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:53.306258   67149 cri.go:89] found id: ""
	I1028 18:30:53.306286   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.306295   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:53.306301   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:53.306362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:53.340222   67149 cri.go:89] found id: ""
	I1028 18:30:53.340244   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.340253   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:53.340258   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:53.340322   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:53.377726   67149 cri.go:89] found id: ""
	I1028 18:30:53.377750   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.377760   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:53.377767   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:53.377820   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:53.414228   67149 cri.go:89] found id: ""
	I1028 18:30:53.414252   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.414262   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:53.414275   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:53.414332   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:53.449152   67149 cri.go:89] found id: ""
	I1028 18:30:53.449179   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.449186   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:53.449192   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:53.449237   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:53.485678   67149 cri.go:89] found id: ""
	I1028 18:30:53.485705   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.485716   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:53.485723   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:53.485784   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:53.520764   67149 cri.go:89] found id: ""
	I1028 18:30:53.520791   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.520802   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:53.520810   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:53.520870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:53.561153   67149 cri.go:89] found id: ""
	I1028 18:30:53.561176   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.561184   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:53.561192   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:53.561202   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:53.642192   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:53.642242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.686527   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:53.686567   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:53.740815   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:53.740849   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:53.754577   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:53.754604   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:53.823717   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:54.112985   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.612820   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.903128   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:55.903482   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.372798   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.871814   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.324847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:56.338572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:56.338628   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:56.375482   67149 cri.go:89] found id: ""
	I1028 18:30:56.375506   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.375517   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:56.375524   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:56.375580   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:56.407894   67149 cri.go:89] found id: ""
	I1028 18:30:56.407921   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.407931   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:56.407938   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:56.407993   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:56.447006   67149 cri.go:89] found id: ""
	I1028 18:30:56.447037   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.447048   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:56.447055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:56.447112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:56.483850   67149 cri.go:89] found id: ""
	I1028 18:30:56.483880   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.483890   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:56.483898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:56.483958   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:56.520008   67149 cri.go:89] found id: ""
	I1028 18:30:56.520038   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.520045   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:56.520051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:56.520099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:56.552567   67149 cri.go:89] found id: ""
	I1028 18:30:56.552592   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.552600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:56.552608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:56.552658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:56.591277   67149 cri.go:89] found id: ""
	I1028 18:30:56.591297   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.591305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:56.591311   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:56.591362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:56.632164   67149 cri.go:89] found id: ""
	I1028 18:30:56.632186   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.632194   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:56.632202   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:56.632219   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:56.683590   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:56.683623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:56.698509   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:56.698539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:56.777141   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.777171   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:56.777188   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:56.851801   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:56.851842   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.394266   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:59.408460   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:59.408545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:59.444066   67149 cri.go:89] found id: ""
	I1028 18:30:59.444092   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.444104   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:59.444112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:59.444165   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:59.479531   67149 cri.go:89] found id: ""
	I1028 18:30:59.479557   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.479568   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:59.479576   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:59.479622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:59.519467   67149 cri.go:89] found id: ""
	I1028 18:30:59.519489   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.519496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:59.519502   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:59.519546   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:59.551108   67149 cri.go:89] found id: ""
	I1028 18:30:59.551131   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.551140   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:59.551146   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:59.551197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:59.585875   67149 cri.go:89] found id: ""
	I1028 18:30:59.585899   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.585907   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:59.585912   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:59.585968   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:59.620571   67149 cri.go:89] found id: ""
	I1028 18:30:59.620595   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.620602   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:59.620608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:59.620655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:59.653927   67149 cri.go:89] found id: ""
	I1028 18:30:59.653954   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.653965   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:59.653972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:59.654039   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:59.689138   67149 cri.go:89] found id: ""
	I1028 18:30:59.689160   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.689168   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:59.689175   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:59.689185   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:59.768231   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:59.768270   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.811980   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:59.812007   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:59.864509   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:59.864543   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:59.879329   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:59.879354   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:59.950134   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:59.112280   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:01.113341   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.402845   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.902628   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.904642   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.872873   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:03.371672   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.450237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:02.464689   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:02.464765   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:02.500938   67149 cri.go:89] found id: ""
	I1028 18:31:02.500964   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.500975   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:02.500982   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:02.501043   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:02.534580   67149 cri.go:89] found id: ""
	I1028 18:31:02.534608   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.534620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:02.534628   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:02.534684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:02.570203   67149 cri.go:89] found id: ""
	I1028 18:31:02.570224   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.570231   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:02.570237   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:02.570284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:02.606037   67149 cri.go:89] found id: ""
	I1028 18:31:02.606064   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.606072   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:02.606082   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:02.606135   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:02.640622   67149 cri.go:89] found id: ""
	I1028 18:31:02.640646   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.640656   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:02.640663   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:02.640723   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:02.676406   67149 cri.go:89] found id: ""
	I1028 18:31:02.676434   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.676444   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:02.676451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:02.676520   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:02.710284   67149 cri.go:89] found id: ""
	I1028 18:31:02.710308   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.710316   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:02.710322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:02.710376   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:02.750853   67149 cri.go:89] found id: ""
	I1028 18:31:02.750899   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.750910   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:02.750918   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:02.750929   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:02.825886   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.825913   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:02.825927   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:02.904828   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:02.904857   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:02.941886   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:02.941922   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:02.991603   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:02.991632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.505655   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:05.520582   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:05.520638   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:05.558724   67149 cri.go:89] found id: ""
	I1028 18:31:05.558753   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.558763   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:05.558770   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:05.558816   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:05.597864   67149 cri.go:89] found id: ""
	I1028 18:31:05.597885   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.597893   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:05.597898   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:05.597956   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:05.643571   67149 cri.go:89] found id: ""
	I1028 18:31:05.643602   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.643613   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:05.643620   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:05.643679   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:05.682010   67149 cri.go:89] found id: ""
	I1028 18:31:05.682039   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.682048   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:05.682053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:05.682106   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:05.716043   67149 cri.go:89] found id: ""
	I1028 18:31:05.716067   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.716080   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:05.716086   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:05.716134   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:05.750962   67149 cri.go:89] found id: ""
	I1028 18:31:05.750995   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.751010   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:05.751016   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:05.751078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:05.785059   67149 cri.go:89] found id: ""
	I1028 18:31:05.785111   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.785124   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:05.785132   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:05.785193   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:05.833525   67149 cri.go:89] found id: ""
	I1028 18:31:05.833550   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.833559   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:05.833566   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:05.833579   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:05.887766   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:05.887796   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.902575   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:05.902606   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:05.975082   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:05.975108   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:05.975122   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:03.613265   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.114362   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.402167   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:07.402252   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.873147   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:08.370748   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.050369   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:06.050396   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.593506   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:08.606188   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:08.606251   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:08.645186   67149 cri.go:89] found id: ""
	I1028 18:31:08.645217   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.645227   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:08.645235   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:08.645294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:08.680728   67149 cri.go:89] found id: ""
	I1028 18:31:08.680759   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.680771   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:08.680778   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:08.680833   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:08.714733   67149 cri.go:89] found id: ""
	I1028 18:31:08.714760   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.714772   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:08.714779   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:08.714844   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:08.750293   67149 cri.go:89] found id: ""
	I1028 18:31:08.750323   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.750333   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:08.750339   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:08.750390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:08.784521   67149 cri.go:89] found id: ""
	I1028 18:31:08.784548   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.784559   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:08.784566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:08.784629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:08.818808   67149 cri.go:89] found id: ""
	I1028 18:31:08.818838   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.818849   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:08.818857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:08.818920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:08.855575   67149 cri.go:89] found id: ""
	I1028 18:31:08.855608   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.855619   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:08.855633   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:08.855690   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:08.892996   67149 cri.go:89] found id: ""
	I1028 18:31:08.893024   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.893035   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:08.893045   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:08.893064   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.937056   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:08.937084   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:08.989013   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:08.989048   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:09.002048   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:09.002077   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:09.075247   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:09.075277   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:09.075290   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:08.612396   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.612689   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:09.402595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.903403   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.371335   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:12.371435   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.371502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.654701   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:11.668066   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:11.668146   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:11.701666   67149 cri.go:89] found id: ""
	I1028 18:31:11.701693   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.701703   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:11.701710   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:11.701769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:11.738342   67149 cri.go:89] found id: ""
	I1028 18:31:11.738368   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.738376   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:11.738381   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:11.738428   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:11.772009   67149 cri.go:89] found id: ""
	I1028 18:31:11.772035   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.772045   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:11.772053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:11.772118   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:11.816210   67149 cri.go:89] found id: ""
	I1028 18:31:11.816237   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.816245   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:11.816251   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:11.816314   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:11.856675   67149 cri.go:89] found id: ""
	I1028 18:31:11.856704   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.856714   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:11.856722   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:11.856785   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:11.896566   67149 cri.go:89] found id: ""
	I1028 18:31:11.896592   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.896600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:11.896606   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:11.896665   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:11.932599   67149 cri.go:89] found id: ""
	I1028 18:31:11.932624   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.932633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:11.932640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:11.932704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:11.966952   67149 cri.go:89] found id: ""
	I1028 18:31:11.966982   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.967008   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:11.967019   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:11.967037   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:12.016465   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:12.016502   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:12.029314   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:12.029343   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:12.098906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:12.098936   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:12.098954   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:12.176440   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:12.176489   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:14.720173   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:14.733796   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:14.733848   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:14.774072   67149 cri.go:89] found id: ""
	I1028 18:31:14.774093   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.774100   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:14.774106   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:14.774152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:14.816116   67149 cri.go:89] found id: ""
	I1028 18:31:14.816145   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.816158   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:14.816166   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:14.816224   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:14.851167   67149 cri.go:89] found id: ""
	I1028 18:31:14.851189   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.851196   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:14.851202   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:14.851247   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:14.885887   67149 cri.go:89] found id: ""
	I1028 18:31:14.885918   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.885926   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:14.885931   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:14.885976   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:14.923787   67149 cri.go:89] found id: ""
	I1028 18:31:14.923815   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.923826   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:14.923833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:14.923892   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:14.960117   67149 cri.go:89] found id: ""
	I1028 18:31:14.960148   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.960160   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:14.960167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:14.960240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:14.998418   67149 cri.go:89] found id: ""
	I1028 18:31:14.998458   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.998470   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:14.998485   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:14.998545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:15.031985   67149 cri.go:89] found id: ""
	I1028 18:31:15.032005   67149 logs.go:282] 0 containers: []
	W1028 18:31:15.032014   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:15.032027   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:15.032038   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:15.045239   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:15.045264   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:15.118954   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:15.118978   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:15.118994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:15.200538   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:15.200569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:15.243581   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:15.243603   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:13.112232   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:15.113498   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.612946   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.401769   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.402729   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.871916   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.872378   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.794670   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:17.808325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:17.808380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:17.841888   67149 cri.go:89] found id: ""
	I1028 18:31:17.841911   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.841919   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:17.841925   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:17.841979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:17.881241   67149 cri.go:89] found id: ""
	I1028 18:31:17.881261   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.881269   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:17.881274   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:17.881331   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:17.922394   67149 cri.go:89] found id: ""
	I1028 18:31:17.922419   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.922428   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:17.922434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:17.922498   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:17.963519   67149 cri.go:89] found id: ""
	I1028 18:31:17.963546   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.963558   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:17.963566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:17.963641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:18.003181   67149 cri.go:89] found id: ""
	I1028 18:31:18.003202   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.003209   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:18.003214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:18.003261   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:18.040305   67149 cri.go:89] found id: ""
	I1028 18:31:18.040338   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.040348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:18.040356   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:18.040413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:18.077671   67149 cri.go:89] found id: ""
	I1028 18:31:18.077696   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.077708   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:18.077715   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:18.077777   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:18.116155   67149 cri.go:89] found id: ""
	I1028 18:31:18.116176   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.116182   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:18.116190   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:18.116201   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:18.168343   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:18.168370   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:18.181962   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:18.181988   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:18.260227   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:18.260251   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:18.260265   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:18.346588   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:18.346620   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:20.885832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:20.899053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:20.899121   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:20.954770   67149 cri.go:89] found id: ""
	I1028 18:31:20.954797   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.954806   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:20.954812   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:20.954870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:20.989809   67149 cri.go:89] found id: ""
	I1028 18:31:20.989834   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.989842   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:20.989848   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:20.989900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:21.027150   67149 cri.go:89] found id: ""
	I1028 18:31:21.027179   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.027191   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:21.027199   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:21.027259   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:20.113283   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:22.612710   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.902738   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.403607   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.371574   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.871000   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.061235   67149 cri.go:89] found id: ""
	I1028 18:31:21.061260   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.061270   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:21.061277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:21.061337   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:21.095451   67149 cri.go:89] found id: ""
	I1028 18:31:21.095473   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.095481   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:21.095487   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:21.095540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:21.135576   67149 cri.go:89] found id: ""
	I1028 18:31:21.135598   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.135606   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:21.135612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:21.135660   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:21.170816   67149 cri.go:89] found id: ""
	I1028 18:31:21.170845   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.170854   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:21.170860   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:21.170920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:21.204616   67149 cri.go:89] found id: ""
	I1028 18:31:21.204649   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.204660   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:21.204672   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:21.204686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:21.254523   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:21.254556   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:21.267981   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:21.268005   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:21.336786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:21.336813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:21.336828   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:21.420596   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:21.420625   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:23.962346   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:23.976628   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:23.976697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:24.016418   67149 cri.go:89] found id: ""
	I1028 18:31:24.016444   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.016453   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:24.016458   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:24.016533   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:24.051448   67149 cri.go:89] found id: ""
	I1028 18:31:24.051474   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.051483   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:24.051488   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:24.051554   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:24.090787   67149 cri.go:89] found id: ""
	I1028 18:31:24.090816   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.090829   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:24.090836   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:24.090900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:24.126315   67149 cri.go:89] found id: ""
	I1028 18:31:24.126342   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.126349   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:24.126355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:24.126402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:24.161340   67149 cri.go:89] found id: ""
	I1028 18:31:24.161367   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.161379   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:24.161387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:24.161445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:24.195991   67149 cri.go:89] found id: ""
	I1028 18:31:24.196017   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.196028   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:24.196036   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:24.196084   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:24.229789   67149 cri.go:89] found id: ""
	I1028 18:31:24.229822   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.229837   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:24.229845   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:24.229906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:24.264724   67149 cri.go:89] found id: ""
	I1028 18:31:24.264748   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.264757   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:24.264765   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:24.264775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:24.303551   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:24.303574   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:24.351693   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:24.351725   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:24.364537   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:24.364566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:24.436935   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:24.436955   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:24.436966   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:25.112870   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.612492   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.902008   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.902544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.902622   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.871089   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.871265   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:29.872201   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
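The interleaved pod_ready entries above come from separate test clusters (different PIDs and different metrics-server pod suffixes), each polling its metrics-server pod until it reports Ready, which it never does in the window captured here. A hypothetical manual equivalent of that poll is sketched below; the context name is a placeholder for whatever profile the test created, and the pod name is copied from the log line above:

	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-dz4nl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'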
	I1028 18:31:27.014928   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:27.029540   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:27.029609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:27.064598   67149 cri.go:89] found id: ""
	I1028 18:31:27.064626   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.064636   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:27.064643   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:27.064704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:27.099432   67149 cri.go:89] found id: ""
	I1028 18:31:27.099455   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.099465   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:27.099472   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:27.099531   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:27.133961   67149 cri.go:89] found id: ""
	I1028 18:31:27.133996   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.134006   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:27.134012   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:27.134075   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:27.171976   67149 cri.go:89] found id: ""
	I1028 18:31:27.172003   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.172014   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:27.172022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:27.172092   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:27.205681   67149 cri.go:89] found id: ""
	I1028 18:31:27.205710   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.205721   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:27.205730   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:27.205793   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:27.244571   67149 cri.go:89] found id: ""
	I1028 18:31:27.244603   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.244612   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:27.244617   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:27.244661   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:27.281692   67149 cri.go:89] found id: ""
	I1028 18:31:27.281722   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.281738   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:27.281746   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:27.281800   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:27.335003   67149 cri.go:89] found id: ""
	I1028 18:31:27.335033   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.335041   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:27.335049   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:27.335066   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:27.353992   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:27.354017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:27.457103   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:27.457125   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:27.457136   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.537717   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:27.537746   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:27.579842   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:27.579870   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.133749   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:30.147518   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:30.147576   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:30.182683   67149 cri.go:89] found id: ""
	I1028 18:31:30.182711   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.182722   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:30.182729   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:30.182792   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:30.215088   67149 cri.go:89] found id: ""
	I1028 18:31:30.215109   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.215118   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:30.215124   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:30.215176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:30.250169   67149 cri.go:89] found id: ""
	I1028 18:31:30.250194   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.250202   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:30.250207   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:30.250284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:30.286028   67149 cri.go:89] found id: ""
	I1028 18:31:30.286055   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.286062   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:30.286069   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:30.286112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:30.320503   67149 cri.go:89] found id: ""
	I1028 18:31:30.320528   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.320539   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:30.320547   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:30.320604   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:30.352773   67149 cri.go:89] found id: ""
	I1028 18:31:30.352793   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.352800   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:30.352806   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:30.352859   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:30.385922   67149 cri.go:89] found id: ""
	I1028 18:31:30.385944   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.385951   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:30.385956   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:30.385999   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:30.421909   67149 cri.go:89] found id: ""
	I1028 18:31:30.421933   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.421945   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:30.421956   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:30.421971   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.470917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:30.470944   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:30.484033   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:30.484059   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:30.554810   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:30.554836   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:30.554850   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:30.634403   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:30.634432   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:30.113496   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.613397   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:30.402688   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.902277   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:31.872598   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:34.371198   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:33.182127   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:33.194994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:33.195063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:33.233076   67149 cri.go:89] found id: ""
	I1028 18:31:33.233098   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.233106   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:33.233112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:33.233160   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:33.266963   67149 cri.go:89] found id: ""
	I1028 18:31:33.266998   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.267021   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:33.267028   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:33.267083   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:33.305888   67149 cri.go:89] found id: ""
	I1028 18:31:33.305914   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.305922   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:33.305928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:33.305979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:33.339451   67149 cri.go:89] found id: ""
	I1028 18:31:33.339479   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.339489   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:33.339496   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:33.339555   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:33.375038   67149 cri.go:89] found id: ""
	I1028 18:31:33.375065   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.375073   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:33.375079   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:33.375125   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:33.409157   67149 cri.go:89] found id: ""
	I1028 18:31:33.409176   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.409183   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:33.409189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:33.409243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:33.449108   67149 cri.go:89] found id: ""
	I1028 18:31:33.449133   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.449149   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:33.449155   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:33.449227   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:33.491194   67149 cri.go:89] found id: ""
	I1028 18:31:33.491215   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.491224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:33.491232   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:33.491250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.530590   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:33.530618   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:33.581933   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:33.581962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:33.595387   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:33.595416   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:33.664855   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:33.664882   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:33.664899   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:35.113673   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.612606   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:35.401938   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.402270   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.372499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:38.372670   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.242724   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:36.256152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:36.256221   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:36.292452   67149 cri.go:89] found id: ""
	I1028 18:31:36.292494   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.292504   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:36.292511   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:36.292568   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:36.325210   67149 cri.go:89] found id: ""
	I1028 18:31:36.325231   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.325238   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:36.325244   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:36.325293   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:36.356738   67149 cri.go:89] found id: ""
	I1028 18:31:36.356757   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.356764   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:36.356769   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:36.356827   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:36.389678   67149 cri.go:89] found id: ""
	I1028 18:31:36.389704   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.389712   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:36.389717   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:36.389775   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:36.422956   67149 cri.go:89] found id: ""
	I1028 18:31:36.422989   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.422998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:36.423005   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:36.423061   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:36.456877   67149 cri.go:89] found id: ""
	I1028 18:31:36.456904   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.456914   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:36.456921   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:36.456983   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:36.489728   67149 cri.go:89] found id: ""
	I1028 18:31:36.489758   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.489766   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:36.489772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:36.489829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:36.524307   67149 cri.go:89] found id: ""
	I1028 18:31:36.524338   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.524350   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:36.524360   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:36.524372   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:36.574771   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:36.574800   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:36.587485   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:36.587506   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:36.655922   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:36.655949   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:36.655962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.738312   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:36.738352   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.279425   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:39.293108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:39.293167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:39.325542   67149 cri.go:89] found id: ""
	I1028 18:31:39.325573   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.325584   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:39.325592   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:39.325656   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:39.357581   67149 cri.go:89] found id: ""
	I1028 18:31:39.357609   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.357620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:39.357627   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:39.357681   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:39.394833   67149 cri.go:89] found id: ""
	I1028 18:31:39.394853   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.394860   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:39.394866   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:39.394916   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:39.430151   67149 cri.go:89] found id: ""
	I1028 18:31:39.430178   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.430188   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:39.430196   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:39.430254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:39.468060   67149 cri.go:89] found id: ""
	I1028 18:31:39.468089   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.468100   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:39.468108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:39.468181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:39.503702   67149 cri.go:89] found id: ""
	I1028 18:31:39.503734   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.503752   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:39.503761   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:39.503829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:39.536193   67149 cri.go:89] found id: ""
	I1028 18:31:39.536221   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.536233   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:39.536240   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:39.536305   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:39.570194   67149 cri.go:89] found id: ""
	I1028 18:31:39.570215   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.570224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:39.570232   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:39.570245   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:39.647179   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:39.647207   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:39.647220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:39.725980   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:39.726012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.765671   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:39.765704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:39.818257   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:39.818289   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:39.614055   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.112561   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:39.902061   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.402314   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:40.871483   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.872270   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.332335   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:42.344964   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:42.345031   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:42.380904   67149 cri.go:89] found id: ""
	I1028 18:31:42.380926   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.380933   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:42.380938   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:42.380982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:42.414361   67149 cri.go:89] found id: ""
	I1028 18:31:42.414385   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.414393   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:42.414399   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:42.414443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:42.447931   67149 cri.go:89] found id: ""
	I1028 18:31:42.447957   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.447968   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:42.447975   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:42.448024   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:42.483262   67149 cri.go:89] found id: ""
	I1028 18:31:42.483283   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.483296   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:42.483301   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:42.483365   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:42.516665   67149 cri.go:89] found id: ""
	I1028 18:31:42.516693   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.516702   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:42.516709   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:42.516776   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:42.550160   67149 cri.go:89] found id: ""
	I1028 18:31:42.550181   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.550188   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:42.550193   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:42.550238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:42.583509   67149 cri.go:89] found id: ""
	I1028 18:31:42.583535   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.583546   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:42.583552   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:42.583611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:42.619276   67149 cri.go:89] found id: ""
	I1028 18:31:42.619312   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.619320   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:42.619328   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:42.619338   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:42.692442   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:42.692487   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:42.731768   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:42.731798   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:42.783997   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:42.784043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.797809   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:42.797834   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:42.863351   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
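Each cycle of the loop above repeats the same probe: pgrep for a kube-apiserver process, crictl listings for each expected control-plane container, and finally kubectl describe nodes against the local endpoint. The crictl listings all come back empty and the describe call is refused on localhost:8443, which is consistent with the apiserver container never having been created. Run by hand on the node, the same probe would look like the sketch below (the commands are taken verbatim from the log lines above; only the comments are added):

	sudo pgrep -xnf kube-apiserver.*minikube.*
	sudo crictl ps -a --quiet --name=kube-apiserver
	# empty output from crictl means no apiserver container exists,
	# so the describe call below is refused on localhost:8443:
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig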
	I1028 18:31:45.363648   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:45.376277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:45.376341   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:45.415231   67149 cri.go:89] found id: ""
	I1028 18:31:45.415255   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.415265   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:45.415273   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:45.415330   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:45.451133   67149 cri.go:89] found id: ""
	I1028 18:31:45.451157   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.451164   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:45.451170   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:45.451228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:45.483526   67149 cri.go:89] found id: ""
	I1028 18:31:45.483552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.483562   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:45.483567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:45.483621   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:45.515799   67149 cri.go:89] found id: ""
	I1028 18:31:45.515828   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.515838   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:45.515846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:45.515906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:45.548043   67149 cri.go:89] found id: ""
	I1028 18:31:45.548071   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.548082   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:45.548090   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:45.548153   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:45.581525   67149 cri.go:89] found id: ""
	I1028 18:31:45.581552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.581563   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:45.581570   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:45.581629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:45.622258   67149 cri.go:89] found id: ""
	I1028 18:31:45.622282   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.622290   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:45.622296   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:45.622353   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:45.661255   67149 cri.go:89] found id: ""
	I1028 18:31:45.661275   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.661284   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:45.661292   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:45.661304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:45.675209   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:45.675242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:45.737546   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.737573   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:45.737592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:45.816012   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:45.816043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:45.854135   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:45.854167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:44.612155   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.612875   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:44.402557   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.902339   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:45.371918   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:47.872710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.875644   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:48.406233   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:48.418950   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:48.419001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:48.452933   67149 cri.go:89] found id: ""
	I1028 18:31:48.452952   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.452961   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:48.452975   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:48.453034   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:48.489604   67149 cri.go:89] found id: ""
	I1028 18:31:48.489630   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.489640   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:48.489648   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:48.489706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:48.525463   67149 cri.go:89] found id: ""
	I1028 18:31:48.525493   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.525504   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:48.525511   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:48.525566   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:48.559266   67149 cri.go:89] found id: ""
	I1028 18:31:48.559294   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.559302   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:48.559308   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:48.559363   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:48.592670   67149 cri.go:89] found id: ""
	I1028 18:31:48.592695   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.592706   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:48.592714   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:48.592769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:48.627175   67149 cri.go:89] found id: ""
	I1028 18:31:48.627196   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.627205   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:48.627213   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:48.627260   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:48.661864   67149 cri.go:89] found id: ""
	I1028 18:31:48.661887   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.661895   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:48.661901   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:48.661946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:48.696731   67149 cri.go:89] found id: ""
	I1028 18:31:48.696756   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.696765   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:48.696775   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:48.696788   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.745390   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:48.745417   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:48.759218   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:48.759241   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:48.830299   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:48.830331   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:48.830349   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:48.909934   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:48.909963   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:49.112884   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.613217   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.402707   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.903103   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:52.373283   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.872603   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.451597   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:51.464889   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:51.464943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:51.499962   67149 cri.go:89] found id: ""
	I1028 18:31:51.499990   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.500001   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:51.500010   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:51.500069   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:51.532341   67149 cri.go:89] found id: ""
	I1028 18:31:51.532370   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.532380   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:51.532388   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:51.532443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:51.565531   67149 cri.go:89] found id: ""
	I1028 18:31:51.565554   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.565561   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:51.565567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:51.565614   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:51.602859   67149 cri.go:89] found id: ""
	I1028 18:31:51.602882   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.602894   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:51.602899   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:51.602943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:51.639896   67149 cri.go:89] found id: ""
	I1028 18:31:51.639915   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.639922   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:51.639928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:51.639972   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:51.675728   67149 cri.go:89] found id: ""
	I1028 18:31:51.675755   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.675762   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:51.675768   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:51.675825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:51.710285   67149 cri.go:89] found id: ""
	I1028 18:31:51.710312   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.710320   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:51.710326   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:51.710374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:51.744527   67149 cri.go:89] found id: ""
	I1028 18:31:51.744551   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.744560   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:51.744570   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:51.744584   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.780580   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:51.780614   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:51.832979   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:51.833008   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:51.846389   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:51.846415   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:51.918177   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:51.918196   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:51.918210   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.493806   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:54.506468   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:54.506526   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:54.540500   67149 cri.go:89] found id: ""
	I1028 18:31:54.540527   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.540537   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:54.540544   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:54.540601   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:54.573399   67149 cri.go:89] found id: ""
	I1028 18:31:54.573428   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.573438   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:54.573448   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:54.573509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:54.606227   67149 cri.go:89] found id: ""
	I1028 18:31:54.606262   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.606272   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:54.606278   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:54.606338   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:54.641143   67149 cri.go:89] found id: ""
	I1028 18:31:54.641163   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.641172   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:54.641179   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:54.641238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:54.674269   67149 cri.go:89] found id: ""
	I1028 18:31:54.674292   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.674300   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:54.674306   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:54.674352   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:54.707160   67149 cri.go:89] found id: ""
	I1028 18:31:54.707183   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.707191   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:54.707197   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:54.707242   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:54.746522   67149 cri.go:89] found id: ""
	I1028 18:31:54.746544   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.746552   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:54.746558   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:54.746613   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:54.779315   67149 cri.go:89] found id: ""
	I1028 18:31:54.779341   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.779348   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:54.779356   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:54.779367   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:54.830987   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:54.831017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:54.844846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:54.844871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:54.913540   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:54.913558   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:54.913568   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.994220   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:54.994250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:54.112785   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.114029   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.401657   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.402726   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.371756   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:59.372308   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.532820   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:57.545394   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:57.545454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:57.582329   67149 cri.go:89] found id: ""
	I1028 18:31:57.582355   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.582365   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:57.582372   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:57.582438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:57.616082   67149 cri.go:89] found id: ""
	I1028 18:31:57.616107   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.616115   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:57.616123   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:57.616167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:57.650118   67149 cri.go:89] found id: ""
	I1028 18:31:57.650144   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.650153   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:57.650162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:57.650215   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:57.684801   67149 cri.go:89] found id: ""
	I1028 18:31:57.684823   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.684831   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:57.684839   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:57.684887   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:57.722396   67149 cri.go:89] found id: ""
	I1028 18:31:57.722423   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.722431   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:57.722437   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:57.722516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:57.759779   67149 cri.go:89] found id: ""
	I1028 18:31:57.759802   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.759809   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:57.759818   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:57.759861   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:57.793977   67149 cri.go:89] found id: ""
	I1028 18:31:57.794034   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.794045   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:57.794053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:57.794117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:57.831104   67149 cri.go:89] found id: ""
	I1028 18:31:57.831130   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.831140   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:57.831151   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:57.831164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:57.920155   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:57.920174   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:57.920184   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:57.999677   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:57.999709   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:58.036647   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:58.036673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:58.088299   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:58.088333   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.601832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:00.615434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:00.615491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:00.653344   67149 cri.go:89] found id: ""
	I1028 18:32:00.653372   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.653383   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:00.653390   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:00.653450   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:00.693086   67149 cri.go:89] found id: ""
	I1028 18:32:00.693111   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.693122   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:00.693130   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:00.693188   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:00.728129   67149 cri.go:89] found id: ""
	I1028 18:32:00.728157   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.728167   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:00.728181   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:00.728243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:00.760540   67149 cri.go:89] found id: ""
	I1028 18:32:00.760568   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.760579   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:00.760586   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:00.760654   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:00.796633   67149 cri.go:89] found id: ""
	I1028 18:32:00.796662   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.796672   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:00.796680   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:00.796740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:00.829924   67149 cri.go:89] found id: ""
	I1028 18:32:00.829954   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.829966   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:00.829974   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:00.830028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:00.861565   67149 cri.go:89] found id: ""
	I1028 18:32:00.861586   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.861593   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:00.861599   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:00.861655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:00.894129   67149 cri.go:89] found id: ""
	I1028 18:32:00.894154   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.894162   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:00.894169   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:00.894180   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.908303   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:00.908331   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:00.974521   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:00.974543   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:00.974557   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:58.612554   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.612655   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:58.901908   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.902851   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.872423   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.873235   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.048113   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:01.048140   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:01.086657   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:01.086731   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.639781   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:03.652239   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:03.652291   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:03.687098   67149 cri.go:89] found id: ""
	I1028 18:32:03.687120   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.687129   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:03.687135   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:03.687181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:03.722176   67149 cri.go:89] found id: ""
	I1028 18:32:03.722206   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.722217   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:03.722225   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:03.722282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:03.757489   67149 cri.go:89] found id: ""
	I1028 18:32:03.757512   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.757520   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:03.757526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:03.757571   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:03.795359   67149 cri.go:89] found id: ""
	I1028 18:32:03.795400   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.795411   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:03.795429   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:03.795489   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:03.830919   67149 cri.go:89] found id: ""
	I1028 18:32:03.830945   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.830953   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:03.830958   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:03.831008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:03.863396   67149 cri.go:89] found id: ""
	I1028 18:32:03.863425   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.863437   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:03.863445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:03.863516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:03.897085   67149 cri.go:89] found id: ""
	I1028 18:32:03.897112   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.897121   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:03.897128   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:03.897189   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:03.929439   67149 cri.go:89] found id: ""
	I1028 18:32:03.929467   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.929478   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:03.929487   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:03.929503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.982917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:03.982943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:03.996333   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:03.996355   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:04.062786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:04.062813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:04.062827   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:04.143988   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:04.144016   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:03.113499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.612544   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.620294   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.402246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.402730   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.904429   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.373120   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:08.871662   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.683977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:06.696605   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:06.696680   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:06.733031   67149 cri.go:89] found id: ""
	I1028 18:32:06.733060   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.733070   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:06.733078   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:06.733138   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:06.769196   67149 cri.go:89] found id: ""
	I1028 18:32:06.769218   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.769225   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:06.769231   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:06.769280   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:06.806938   67149 cri.go:89] found id: ""
	I1028 18:32:06.806959   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.806966   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:06.806972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:06.807017   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:06.839506   67149 cri.go:89] found id: ""
	I1028 18:32:06.839528   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.839537   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:06.839542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:06.839587   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:06.878275   67149 cri.go:89] found id: ""
	I1028 18:32:06.878300   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.878309   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:06.878317   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:06.878382   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:06.916336   67149 cri.go:89] found id: ""
	I1028 18:32:06.916366   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.916374   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:06.916381   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:06.916434   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:06.971413   67149 cri.go:89] found id: ""
	I1028 18:32:06.971435   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.971443   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:06.971449   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:06.971494   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:07.004432   67149 cri.go:89] found id: ""
	I1028 18:32:07.004464   67149 logs.go:282] 0 containers: []
	W1028 18:32:07.004485   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:07.004496   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:07.004509   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:07.081741   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:07.081780   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:07.122022   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:07.122053   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:07.169470   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:07.169496   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:07.183433   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:07.183459   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:07.251765   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:09.752773   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:09.766042   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:09.766119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:09.802881   67149 cri.go:89] found id: ""
	I1028 18:32:09.802911   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.802923   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:09.802930   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:09.802987   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:09.840269   67149 cri.go:89] found id: ""
	I1028 18:32:09.840292   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.840300   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:09.840305   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:09.840370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:09.874654   67149 cri.go:89] found id: ""
	I1028 18:32:09.874679   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.874689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:09.874696   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:09.874752   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:09.910328   67149 cri.go:89] found id: ""
	I1028 18:32:09.910350   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.910358   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:09.910365   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:09.910425   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:09.942717   67149 cri.go:89] found id: ""
	I1028 18:32:09.942744   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.942752   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:09.942757   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:09.942814   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:09.975644   67149 cri.go:89] found id: ""
	I1028 18:32:09.975674   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.975685   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:09.975692   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:09.975750   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:10.008257   67149 cri.go:89] found id: ""
	I1028 18:32:10.008294   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.008305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:10.008313   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:10.008373   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:10.041678   67149 cri.go:89] found id: ""
	I1028 18:32:10.041705   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.041716   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:10.041726   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:10.041739   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:10.090474   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:10.090503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:10.103846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:10.103874   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:10.172819   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:10.172847   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:10.172862   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:10.251927   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:10.251955   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:10.112553   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.113090   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:10.401890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.902888   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:11.371860   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:13.373112   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.795985   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:12.810859   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:12.810921   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:12.849897   67149 cri.go:89] found id: ""
	I1028 18:32:12.849925   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.849934   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:12.849940   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:12.850003   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:12.883007   67149 cri.go:89] found id: ""
	I1028 18:32:12.883034   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.883045   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:12.883052   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:12.883111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:12.917458   67149 cri.go:89] found id: ""
	I1028 18:32:12.917485   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.917496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:12.917503   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:12.917561   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:12.950531   67149 cri.go:89] found id: ""
	I1028 18:32:12.950558   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.950568   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:12.950576   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:12.950631   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:12.983902   67149 cri.go:89] found id: ""
	I1028 18:32:12.983929   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.983937   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:12.983943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:12.983986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:13.017486   67149 cri.go:89] found id: ""
	I1028 18:32:13.017513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.017521   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:13.017526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:13.017582   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:13.050553   67149 cri.go:89] found id: ""
	I1028 18:32:13.050582   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.050594   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:13.050601   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:13.050658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:13.083489   67149 cri.go:89] found id: ""
	I1028 18:32:13.083513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.083520   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:13.083528   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:13.083537   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:13.137451   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:13.137482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:13.153154   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:13.153179   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:13.221043   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:13.221066   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:13.221080   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:13.299930   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:13.299960   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:15.850484   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:15.862930   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:15.862982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:15.895625   67149 cri.go:89] found id: ""
	I1028 18:32:15.895643   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.895651   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:15.895657   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:15.895701   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:15.928073   67149 cri.go:89] found id: ""
	I1028 18:32:15.928103   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.928113   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:15.928120   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:15.928180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:15.962261   67149 cri.go:89] found id: ""
	I1028 18:32:15.962282   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.962290   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:15.962295   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:15.962342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:15.999177   67149 cri.go:89] found id: ""
	I1028 18:32:15.999206   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.999216   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:15.999224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:15.999282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:16.033098   67149 cri.go:89] found id: ""
	I1028 18:32:16.033126   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.033138   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:16.033145   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:16.033208   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:14.612739   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.112266   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.401576   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.401773   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:18.372059   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:16.067049   67149 cri.go:89] found id: ""
	I1028 18:32:16.067071   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.067083   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:16.067089   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:16.067145   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:16.106936   67149 cri.go:89] found id: ""
	I1028 18:32:16.106970   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.106981   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:16.106988   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:16.107044   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:16.141702   67149 cri.go:89] found id: ""
	I1028 18:32:16.141729   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.141741   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:16.141751   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:16.141762   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:16.178772   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:16.178803   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:16.230851   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:16.230878   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:16.244489   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:16.244514   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:16.319362   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:16.319389   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:16.319405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:18.899694   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:18.913287   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:18.913358   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:18.954136   67149 cri.go:89] found id: ""
	I1028 18:32:18.954158   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.954165   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:18.954170   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:18.954218   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:18.987427   67149 cri.go:89] found id: ""
	I1028 18:32:18.987449   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.987457   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:18.987462   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:18.987505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:19.022067   67149 cri.go:89] found id: ""
	I1028 18:32:19.022099   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.022110   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:19.022118   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:19.022167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:19.054533   67149 cri.go:89] found id: ""
	I1028 18:32:19.054560   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.054570   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:19.054578   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:19.054644   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:19.099324   67149 cri.go:89] found id: ""
	I1028 18:32:19.099356   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.099367   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:19.099375   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:19.099436   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:19.146437   67149 cri.go:89] found id: ""
	I1028 18:32:19.146463   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.146470   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:19.146478   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:19.146540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:19.192027   67149 cri.go:89] found id: ""
	I1028 18:32:19.192053   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.192070   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:19.192078   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:19.192140   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:19.228411   67149 cri.go:89] found id: ""
	I1028 18:32:19.228437   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.228447   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:19.228457   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:19.228480   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:19.313151   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:19.313183   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:19.352117   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:19.352142   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:19.402772   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:19.402805   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:19.416148   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:19.416167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:19.483098   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:19.112720   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.611924   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:19.403635   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.902116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:20.872280   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:22.872726   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.983420   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:21.997129   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:21.997180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:22.035600   67149 cri.go:89] found id: ""
	I1028 18:32:22.035622   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.035631   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:22.035637   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:22.035684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:22.073413   67149 cri.go:89] found id: ""
	I1028 18:32:22.073440   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.073450   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:22.073458   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:22.073505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:22.108637   67149 cri.go:89] found id: ""
	I1028 18:32:22.108663   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.108673   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:22.108682   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:22.108740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:22.145837   67149 cri.go:89] found id: ""
	I1028 18:32:22.145860   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.145867   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:22.145873   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:22.145928   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:22.183830   67149 cri.go:89] found id: ""
	I1028 18:32:22.183855   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.183864   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:22.183869   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:22.183917   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:22.221402   67149 cri.go:89] found id: ""
	I1028 18:32:22.221423   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.221430   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:22.221436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:22.221484   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:22.262193   67149 cri.go:89] found id: ""
	I1028 18:32:22.262220   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.262229   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:22.262234   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:22.262297   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:22.298774   67149 cri.go:89] found id: ""
	I1028 18:32:22.298797   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.298808   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:22.298819   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:22.298831   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:22.348677   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:22.348716   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:22.362199   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:22.362220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:22.429304   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:22.429327   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:22.429345   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:22.511591   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:22.511623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.049119   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:25.063910   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:25.063970   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:25.099795   67149 cri.go:89] found id: ""
	I1028 18:32:25.099822   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.099833   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:25.099840   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:25.099898   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:25.137957   67149 cri.go:89] found id: ""
	I1028 18:32:25.137985   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.137995   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:25.138002   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:25.138063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:25.174687   67149 cri.go:89] found id: ""
	I1028 18:32:25.174715   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.174726   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:25.174733   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:25.174795   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:25.207039   67149 cri.go:89] found id: ""
	I1028 18:32:25.207067   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.207077   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:25.207084   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:25.207130   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:25.239961   67149 cri.go:89] found id: ""
	I1028 18:32:25.239990   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.239998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:25.240004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:25.240055   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:25.273823   67149 cri.go:89] found id: ""
	I1028 18:32:25.273848   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.273858   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:25.273865   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:25.273925   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:25.310725   67149 cri.go:89] found id: ""
	I1028 18:32:25.310754   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.310765   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:25.310772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:25.310830   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:25.348724   67149 cri.go:89] found id: ""
	I1028 18:32:25.348749   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.348760   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:25.348770   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:25.348784   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:25.430213   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:25.430243   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.472233   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:25.472263   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:25.525648   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:25.525676   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:25.538697   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:25.538721   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:25.606779   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:23.612901   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.112494   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:23.902733   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.402271   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:25.372428   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:27.870461   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:29.871824   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.107877   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:28.122241   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:28.122296   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:28.157042   67149 cri.go:89] found id: ""
	I1028 18:32:28.157070   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.157082   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:28.157089   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:28.157142   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:28.190625   67149 cri.go:89] found id: ""
	I1028 18:32:28.190648   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.190658   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:28.190666   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:28.190724   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:28.224528   67149 cri.go:89] found id: ""
	I1028 18:32:28.224551   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.224559   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:28.224565   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:28.224609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:28.265073   67149 cri.go:89] found id: ""
	I1028 18:32:28.265100   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.265110   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:28.265116   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:28.265174   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:28.302598   67149 cri.go:89] found id: ""
	I1028 18:32:28.302623   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.302633   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:28.302640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:28.302697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:28.339757   67149 cri.go:89] found id: ""
	I1028 18:32:28.339781   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.339789   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:28.339794   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:28.339846   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:28.375185   67149 cri.go:89] found id: ""
	I1028 18:32:28.375213   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.375224   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:28.375231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:28.375294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:28.413292   67149 cri.go:89] found id: ""
	I1028 18:32:28.413316   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.413334   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:28.413344   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:28.413376   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:28.464069   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:28.464098   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:28.478275   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:28.478299   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:28.546483   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.546504   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:28.546515   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:28.623015   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:28.623041   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:28.613303   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.111518   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.403789   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:30.903113   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:32.371951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:34.372820   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.161570   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:31.175056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:31.175119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:31.210163   67149 cri.go:89] found id: ""
	I1028 18:32:31.210187   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.210199   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:31.210207   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:31.210264   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:31.244605   67149 cri.go:89] found id: ""
	I1028 18:32:31.244630   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.244637   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:31.244643   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:31.244688   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:31.280793   67149 cri.go:89] found id: ""
	I1028 18:32:31.280818   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.280827   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:31.280833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:31.280890   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:31.314616   67149 cri.go:89] found id: ""
	I1028 18:32:31.314641   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.314649   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:31.314654   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:31.314709   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:31.349386   67149 cri.go:89] found id: ""
	I1028 18:32:31.349410   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.349417   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:31.349423   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:31.349469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:31.382831   67149 cri.go:89] found id: ""
	I1028 18:32:31.382861   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.382871   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:31.382879   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:31.382924   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:31.417365   67149 cri.go:89] found id: ""
	I1028 18:32:31.417391   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.417400   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:31.417410   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:31.417469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:31.450631   67149 cri.go:89] found id: ""
	I1028 18:32:31.450660   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.450672   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:31.450683   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:31.450697   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.488932   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:31.488959   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:31.539335   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:31.539361   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:31.552304   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:31.552328   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:31.629291   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:31.629308   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:31.629323   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.207517   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:34.221231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:34.221310   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:34.255342   67149 cri.go:89] found id: ""
	I1028 18:32:34.255365   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.255373   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:34.255379   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:34.255438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:34.303802   67149 cri.go:89] found id: ""
	I1028 18:32:34.303827   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.303836   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:34.303843   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:34.303896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:34.339531   67149 cri.go:89] found id: ""
	I1028 18:32:34.339568   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.339579   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:34.339589   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:34.339653   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:34.374063   67149 cri.go:89] found id: ""
	I1028 18:32:34.374084   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.374094   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:34.374102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:34.374155   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:34.410880   67149 cri.go:89] found id: ""
	I1028 18:32:34.410909   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.410918   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:34.410924   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:34.410971   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:34.445372   67149 cri.go:89] found id: ""
	I1028 18:32:34.445397   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.445408   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:34.445416   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:34.445474   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:34.477820   67149 cri.go:89] found id: ""
	I1028 18:32:34.477844   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.477851   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:34.477857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:34.477909   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:34.517581   67149 cri.go:89] found id: ""
	I1028 18:32:34.517602   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.517609   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:34.517618   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:34.517632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:34.530407   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:34.530430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:34.599055   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:34.599083   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:34.599096   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.681579   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:34.681612   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:34.720523   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:34.720550   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:33.111858   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.112216   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.613521   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:33.401782   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.402544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.901848   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:36.871451   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.372642   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.272697   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:37.289091   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:37.289159   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:37.321600   67149 cri.go:89] found id: ""
	I1028 18:32:37.321628   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.321639   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:37.321647   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:37.321704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:37.353296   67149 cri.go:89] found id: ""
	I1028 18:32:37.353324   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.353337   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:37.353343   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:37.353400   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:37.386299   67149 cri.go:89] found id: ""
	I1028 18:32:37.386321   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.386328   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:37.386333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:37.386401   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:37.420992   67149 cri.go:89] found id: ""
	I1028 18:32:37.421026   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.421039   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:37.421047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:37.421117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:37.456174   67149 cri.go:89] found id: ""
	I1028 18:32:37.456206   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.456217   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:37.456224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:37.456284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:37.491796   67149 cri.go:89] found id: ""
	I1028 18:32:37.491819   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.491827   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:37.491833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:37.491878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:37.529002   67149 cri.go:89] found id: ""
	I1028 18:32:37.529028   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.529039   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:37.529047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:37.529111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:37.568967   67149 cri.go:89] found id: ""
	I1028 18:32:37.568993   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.569001   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:37.569010   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:37.569022   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:37.640041   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:37.640065   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:37.640076   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:37.725490   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:37.725524   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:37.771858   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:37.771879   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.821240   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:37.821271   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.334946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:40.349147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:40.349216   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:40.383931   67149 cri.go:89] found id: ""
	I1028 18:32:40.383956   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.383966   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:40.383973   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:40.384028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:40.419877   67149 cri.go:89] found id: ""
	I1028 18:32:40.419905   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.419915   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:40.419922   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:40.419978   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:40.453659   67149 cri.go:89] found id: ""
	I1028 18:32:40.453681   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.453689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:40.453695   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:40.453744   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:40.486299   67149 cri.go:89] found id: ""
	I1028 18:32:40.486326   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.486343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:40.486350   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:40.486407   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:40.518309   67149 cri.go:89] found id: ""
	I1028 18:32:40.518334   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.518344   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:40.518351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:40.518402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:40.549008   67149 cri.go:89] found id: ""
	I1028 18:32:40.549040   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.549049   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:40.549055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:40.549108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:40.586157   67149 cri.go:89] found id: ""
	I1028 18:32:40.586177   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.586184   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:40.586189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:40.586232   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:40.621107   67149 cri.go:89] found id: ""
	I1028 18:32:40.621133   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.621144   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:40.621153   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:40.621164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.633793   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:40.633816   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:40.700370   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:40.700393   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:40.700405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:40.780964   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:40.780993   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:40.819904   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:40.819928   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:40.112755   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:42.113116   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.903476   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.904639   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.872360   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.371399   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:43.371487   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:43.384387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:43.384445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:43.419889   67149 cri.go:89] found id: ""
	I1028 18:32:43.419922   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.419931   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:43.419937   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:43.419997   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:43.455177   67149 cri.go:89] found id: ""
	I1028 18:32:43.455209   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.455219   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:43.455227   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:43.455295   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:43.493070   67149 cri.go:89] found id: ""
	I1028 18:32:43.493094   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.493104   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:43.493111   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:43.493170   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:43.526164   67149 cri.go:89] found id: ""
	I1028 18:32:43.526191   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.526199   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:43.526205   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:43.526254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:43.559225   67149 cri.go:89] found id: ""
	I1028 18:32:43.559252   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.559263   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:43.559270   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:43.559323   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:43.597178   67149 cri.go:89] found id: ""
	I1028 18:32:43.597198   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.597206   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:43.597212   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:43.597276   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:43.633179   67149 cri.go:89] found id: ""
	I1028 18:32:43.633200   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.633209   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:43.633214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:43.633290   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:43.669567   67149 cri.go:89] found id: ""
	I1028 18:32:43.669596   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.669605   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:43.669615   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:43.669631   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:43.737618   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:43.737638   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:43.737650   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:43.821394   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:43.821425   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:43.859924   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:43.859950   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:43.913539   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:43.913566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:44.611539   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.613781   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.401399   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.401930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.371445   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.372075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.429021   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:46.443137   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:46.443197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:46.480363   67149 cri.go:89] found id: ""
	I1028 18:32:46.480385   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.480394   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:46.480400   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:46.480452   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:46.514702   67149 cri.go:89] found id: ""
	I1028 18:32:46.514731   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.514738   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:46.514744   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:46.514796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:46.546829   67149 cri.go:89] found id: ""
	I1028 18:32:46.546857   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.546868   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:46.546874   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:46.546920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:46.580372   67149 cri.go:89] found id: ""
	I1028 18:32:46.580398   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.580407   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:46.580415   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:46.580491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:46.615455   67149 cri.go:89] found id: ""
	I1028 18:32:46.615479   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.615489   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:46.615497   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:46.615556   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:46.649547   67149 cri.go:89] found id: ""
	I1028 18:32:46.649570   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.649577   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:46.649583   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:46.649641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:46.684744   67149 cri.go:89] found id: ""
	I1028 18:32:46.684768   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.684779   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:46.684787   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:46.684852   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:46.725530   67149 cri.go:89] found id: ""
	I1028 18:32:46.725558   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.725569   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:46.725578   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:46.725592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:46.794487   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:46.794506   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:46.794517   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:46.881407   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:46.881438   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:46.921649   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:46.921671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:46.972915   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:46.972947   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.486835   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:49.501445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:49.501509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:49.537356   67149 cri.go:89] found id: ""
	I1028 18:32:49.537377   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.537384   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:49.537389   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:49.537443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:49.568514   67149 cri.go:89] found id: ""
	I1028 18:32:49.568541   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.568549   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:49.568555   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:49.568610   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:49.602300   67149 cri.go:89] found id: ""
	I1028 18:32:49.602324   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.602333   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:49.602342   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:49.602390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:49.640326   67149 cri.go:89] found id: ""
	I1028 18:32:49.640356   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.640366   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:49.640376   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:49.640437   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:49.675145   67149 cri.go:89] found id: ""
	I1028 18:32:49.675175   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.675183   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:49.675189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:49.675235   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:49.711104   67149 cri.go:89] found id: ""
	I1028 18:32:49.711129   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.711139   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:49.711147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:49.711206   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:49.748316   67149 cri.go:89] found id: ""
	I1028 18:32:49.748366   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.748378   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:49.748385   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:49.748441   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:49.781620   67149 cri.go:89] found id: ""
	I1028 18:32:49.781646   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.781656   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:49.781665   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:49.781679   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.795119   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:49.795143   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:49.870438   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:49.870519   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:49.870539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:49.956845   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:49.956875   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:49.993067   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:49.993097   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:49.112102   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:51.612691   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.901950   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.902354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.903627   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.871412   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.871499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:54.874588   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.543260   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:52.556524   67149 kubeadm.go:597] duration metric: took 4m2.404527005s to restartPrimaryControlPlane
	W1028 18:32:52.556602   67149 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:52.556639   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:32:53.011065   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:32:53.026226   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:32:53.035868   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:32:53.045257   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:32:53.045271   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:32:53.045302   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:32:53.054383   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:32:53.054430   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:32:53.063665   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:32:53.073006   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:32:53.073054   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:32:53.083156   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.092700   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:32:53.092742   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.102374   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:32:53.112072   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:32:53.112121   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:32:53.122102   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:32:53.347625   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:32:53.613118   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:56.111841   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:55.402354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.902406   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.371909   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:59.872630   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.112962   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:00.613499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.896006   66801 pod_ready.go:82] duration metric: took 4m0.00005957s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	E1028 18:32:58.896033   66801 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:32:58.896052   66801 pod_ready.go:39] duration metric: took 4m13.055181811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:32:58.896092   66801 kubeadm.go:597] duration metric: took 4m21.540757653s to restartPrimaryControlPlane
	W1028 18:32:58.896147   66801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:58.896173   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:02.372443   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:04.871981   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:03.113038   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:05.114488   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:07.612365   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:06.872705   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.371018   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.612856   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:12.114228   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:11.371831   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:13.372636   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:14.613213   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.113328   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:15.871907   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.872203   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:19.612892   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:21.613052   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:20.370964   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:22.371880   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:24.372718   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:25.039296   66801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.14309835s)
	I1028 18:33:25.039378   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:25.056172   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:25.066775   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:25.077717   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:25.077734   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:25.077770   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:33:25.086924   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:25.086968   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:25.096867   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:33:25.106162   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:25.106205   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:25.117015   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.126191   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:25.126245   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.135691   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:33:25.144827   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:25.144867   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:25.153834   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:25.201789   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:33:25.201866   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:33:25.306568   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:33:25.306717   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:33:25.306845   66801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:33:25.314339   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:33:25.316173   66801 out.go:235]   - Generating certificates and keys ...
	I1028 18:33:25.316271   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:33:25.316345   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:33:25.316463   66801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:33:25.316571   66801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:33:25.316688   66801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:33:25.316768   66801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:33:25.316857   66801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:33:25.316943   66801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:33:25.317047   66801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:33:25.317149   66801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:33:25.317209   66801 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:33:25.317299   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:33:25.643056   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:33:25.723345   66801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:33:25.831628   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:33:25.908255   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:33:26.215149   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:33:26.215654   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:33:26.218291   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:33:24.111834   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.113295   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.220065   66801 out.go:235]   - Booting up control plane ...
	I1028 18:33:26.220170   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:33:26.220251   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:33:26.220336   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:33:26.239633   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:33:26.245543   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:33:26.245612   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:33:26.378154   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:33:26.378332   66801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:33:26.879957   66801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.937575ms
	I1028 18:33:26.880090   66801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:33:26.365771   67489 pod_ready.go:82] duration metric: took 4m0.000286415s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:26.365796   67489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:26.365812   67489 pod_ready.go:39] duration metric: took 4m12.539631154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:26.365837   67489 kubeadm.go:597] duration metric: took 4m19.835720994s to restartPrimaryControlPlane
	W1028 18:33:26.365884   67489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:26.365910   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:31.882091   66801 kubeadm.go:310] [api-check] The API server is healthy after 5.002114527s
	I1028 18:33:31.897915   66801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:33:31.914311   66801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:33:31.943604   66801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:33:31.943859   66801 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-051152 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:33:31.954350   66801 kubeadm.go:310] [bootstrap-token] Using token: h7eyzq.87sgylc03ke6zhfy
	I1028 18:33:28.613480   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.113034   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.955444   66801 out.go:235]   - Configuring RBAC rules ...
	I1028 18:33:31.955591   66801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:33:31.960749   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:33:31.967695   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:33:31.970863   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:33:31.973924   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:33:31.979191   66801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:33:32.291512   66801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:33:32.714999   66801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:33:33.291889   66801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:33:33.293069   66801 kubeadm.go:310] 
	I1028 18:33:33.293167   66801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:33:33.293182   66801 kubeadm.go:310] 
	I1028 18:33:33.293255   66801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:33:33.293268   66801 kubeadm.go:310] 
	I1028 18:33:33.293307   66801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:33:33.293372   66801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:33:33.293435   66801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:33:33.293447   66801 kubeadm.go:310] 
	I1028 18:33:33.293518   66801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:33:33.293526   66801 kubeadm.go:310] 
	I1028 18:33:33.293595   66801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:33:33.293624   66801 kubeadm.go:310] 
	I1028 18:33:33.293712   66801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:33:33.293842   66801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:33:33.293946   66801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:33:33.293960   66801 kubeadm.go:310] 
	I1028 18:33:33.294117   66801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:33:33.294196   66801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:33:33.294203   66801 kubeadm.go:310] 
	I1028 18:33:33.294276   66801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294385   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:33:33.294414   66801 kubeadm.go:310] 	--control-plane 
	I1028 18:33:33.294427   66801 kubeadm.go:310] 
	I1028 18:33:33.294515   66801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:33:33.294525   66801 kubeadm.go:310] 
	I1028 18:33:33.294629   66801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294774   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:33:33.295715   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:33:33.295839   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:33:33.295852   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:33:33.297447   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:33:33.298607   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:33:33.311113   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:33:33.329576   66801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:33:33.329634   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:33.329680   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-051152 minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=no-preload-051152 minikube.k8s.io/primary=true
	I1028 18:33:33.355186   66801 ops.go:34] apiserver oom_adj: -16
	I1028 18:33:33.509281   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.009672   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.509515   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.010084   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.509359   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.009689   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.509671   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.009884   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.510004   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.615853   66801 kubeadm.go:1113] duration metric: took 4.286272328s to wait for elevateKubeSystemPrivileges
	I1028 18:33:37.615890   66801 kubeadm.go:394] duration metric: took 5m0.313982235s to StartCluster
	I1028 18:33:37.615913   66801 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.616000   66801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:33:37.618418   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.618741   66801 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:33:37.618857   66801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:33:37.618951   66801 addons.go:69] Setting storage-provisioner=true in profile "no-preload-051152"
	I1028 18:33:37.618963   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:33:37.618975   66801 addons.go:69] Setting default-storageclass=true in profile "no-preload-051152"
	I1028 18:33:37.619001   66801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-051152"
	I1028 18:33:37.618973   66801 addons.go:234] Setting addon storage-provisioner=true in "no-preload-051152"
	W1028 18:33:37.619019   66801 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:33:37.619012   66801 addons.go:69] Setting metrics-server=true in profile "no-preload-051152"
	I1028 18:33:37.619043   66801 addons.go:234] Setting addon metrics-server=true in "no-preload-051152"
	I1028 18:33:37.619047   66801 host.go:66] Checking if "no-preload-051152" exists ...
	W1028 18:33:37.619056   66801 addons.go:243] addon metrics-server should already be in state true
	I1028 18:33:37.619097   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.619417   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619446   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619472   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619488   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619487   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619521   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.620738   66801 out.go:177] * Verifying Kubernetes components...
	I1028 18:33:37.622165   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:33:37.636006   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1028 18:33:37.636285   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I1028 18:33:37.636536   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.636621   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.637055   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637082   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637344   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637368   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637419   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637634   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637811   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.638112   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.638157   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.638738   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1028 18:33:37.639176   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.639609   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.639632   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.639918   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.640333   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.640375   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.641571   66801 addons.go:234] Setting addon default-storageclass=true in "no-preload-051152"
	W1028 18:33:37.641592   66801 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:33:37.641620   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.641947   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.641981   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.657758   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I1028 18:33:37.657834   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I1028 18:33:37.657942   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I1028 18:33:37.658187   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658335   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658739   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658752   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658877   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658896   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658931   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.659309   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659358   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659409   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.659428   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.659552   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.659934   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.659964   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.660163   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.660406   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.661568   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.662429   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.663435   66801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:33:37.664414   66801 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:33:33.613699   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:36.111831   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:37.665306   66801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.665324   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:33:37.665343   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.666055   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:33:37.666073   66801 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:33:37.666092   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.668918   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669385   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669519   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.669543   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669754   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.669942   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.670093   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.670266   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.670513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.670556   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.670719   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.670851   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.671014   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.671115   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.677419   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I1028 18:33:37.677828   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.678184   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.678201   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.678476   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.678686   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.680177   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.680403   66801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.680420   66801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:33:37.680437   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.683981   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.684534   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.685007   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.685153   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.685307   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.832104   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:33:37.859406   66801 node_ready.go:35] waiting up to 6m0s for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873437   66801 node_ready.go:49] node "no-preload-051152" has status "Ready":"True"
	I1028 18:33:37.873460   66801 node_ready.go:38] duration metric: took 14.023686ms for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873470   66801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:37.888286   66801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:37.917341   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:33:37.917363   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:33:37.948690   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:33:37.948716   66801 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:33:37.967948   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.971737   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.998758   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:37.998782   66801 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:33:38.034907   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:38.924695   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924720   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.924762   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924828   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925048   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925079   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925093   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925105   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925128   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925131   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925142   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925153   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925154   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925164   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925372   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925397   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925382   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926852   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926857   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.926872   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.955462   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.955492   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.955858   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.955938   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.955953   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373144   66801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.338192413s)
	I1028 18:33:39.373209   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373224   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373512   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373529   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373537   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373544   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373761   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373775   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373785   66801 addons.go:475] Verifying addon metrics-server=true in "no-preload-051152"
	I1028 18:33:39.375584   66801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:33:38.113078   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:40.612141   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.612763   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:39.377031   66801 addons.go:510] duration metric: took 1.758176418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:33:39.906691   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.396083   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:44.894264   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:46.396937   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.397023   66801 pod_ready.go:82] duration metric: took 8.508709164s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.397048   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402560   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.402579   66801 pod_ready.go:82] duration metric: took 5.5155ms for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402588   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406630   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.406646   66801 pod_ready.go:82] duration metric: took 4.052513ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406654   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411238   66801 pod_ready.go:93] pod "kube-proxy-28qht" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.411253   66801 pod_ready.go:82] duration metric: took 4.592983ms for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411260   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414867   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.414880   66801 pod_ready.go:82] duration metric: took 3.615132ms for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414886   66801 pod_ready.go:39] duration metric: took 8.541406133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:46.414900   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:33:46.414943   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:33:46.430889   66801 api_server.go:72] duration metric: took 8.81211088s to wait for apiserver process to appear ...
	I1028 18:33:46.430907   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:33:46.430925   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:33:46.435248   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:33:46.435963   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:33:46.435978   66801 api_server.go:131] duration metric: took 5.065719ms to wait for apiserver health ...
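Note: the healthz wait above is just an HTTPS GET against the apiserver endpoint shown in the log; a healthy apiserver answers 200 with the body "ok". A bare-bones sketch of that probe; skipping TLS verification is only for illustration (the real check presents the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint taken from the log line above.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.61.78:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" when healthy
    }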
	I1028 18:33:46.435984   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:33:46.596186   66801 system_pods.go:59] 9 kube-system pods found
	I1028 18:33:46.596222   66801 system_pods.go:61] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.596230   66801 system_pods.go:61] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.596234   66801 system_pods.go:61] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.596238   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.596242   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.596246   66801 system_pods.go:61] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.596252   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.596301   66801 system_pods.go:61] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.596317   66801 system_pods.go:61] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.596324   66801 system_pods.go:74] duration metric: took 160.335823ms to wait for pod list to return data ...
	I1028 18:33:46.596341   66801 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:33:46.793115   66801 default_sa.go:45] found service account: "default"
	I1028 18:33:46.793147   66801 default_sa.go:55] duration metric: took 196.795286ms for default service account to be created ...
	I1028 18:33:46.793157   66801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:33:46.995868   66801 system_pods.go:86] 9 kube-system pods found
	I1028 18:33:46.995899   66801 system_pods.go:89] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.995905   66801 system_pods.go:89] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.995909   66801 system_pods.go:89] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.995912   66801 system_pods.go:89] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.995917   66801 system_pods.go:89] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.995920   66801 system_pods.go:89] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.995924   66801 system_pods.go:89] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.995929   66801 system_pods.go:89] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.995934   66801 system_pods.go:89] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.995941   66801 system_pods.go:126] duration metric: took 202.778451ms to wait for k8s-apps to be running ...
	I1028 18:33:46.995946   66801 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:33:46.995990   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:47.011260   66801 system_svc.go:56] duration metric: took 15.302599ms WaitForService to wait for kubelet
	I1028 18:33:47.011285   66801 kubeadm.go:582] duration metric: took 9.392510785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:33:47.011303   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:33:47.193217   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:33:47.193239   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:33:47.193250   66801 node_conditions.go:105] duration metric: took 181.942948ms to run NodePressure ...
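Note: the NodePressure step reads the node's reported capacity (the ephemeral-storage and cpu figures above) and checks that no pressure condition is set. A small self-contained client-go sketch of the same read; the kubeconfig path is again a placeholder assumption:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity values corresponding to the "17734596Ki" / "cpu capacity is 2" lines above.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

    		// Flag the node if it reports memory or disk pressure.
    		for _, c := range n.Status.Conditions {
    			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
    				fmt.Println("node under pressure:", c.Type)
    			}
    		}
    	}
    }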
	I1028 18:33:47.193261   66801 start.go:241] waiting for startup goroutines ...
	I1028 18:33:47.193267   66801 start.go:246] waiting for cluster config update ...
	I1028 18:33:47.193278   66801 start.go:255] writing updated cluster config ...
	I1028 18:33:47.193529   66801 ssh_runner.go:195] Run: rm -f paused
	I1028 18:33:47.240247   66801 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:33:47.242139   66801 out.go:177] * Done! kubectl is now configured to use "no-preload-051152" cluster and "default" namespace by default
	I1028 18:33:45.112037   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:47.112764   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:48.107354   66600 pod_ready.go:82] duration metric: took 4m0.001062902s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:48.107377   66600 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:48.107395   66600 pod_ready.go:39] duration metric: took 4m13.535788316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:48.107420   66600 kubeadm.go:597] duration metric: took 4m22.316644235s to restartPrimaryControlPlane
	W1028 18:33:48.107467   66600 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:48.107490   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:52.667497   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.301566887s)
	I1028 18:33:52.667559   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:52.683580   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:52.695334   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:52.705505   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:52.705524   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:52.705569   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:33:52.714922   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:52.714969   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:52.724156   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:33:52.733125   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:52.733161   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:52.742369   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.751021   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:52.751065   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.760543   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:33:52.770939   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:52.770985   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
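Note: the stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it (or does not exist), so the following kubeadm init can regenerate them. A rough local-exec equivalent of that per-file check (the real code runs the same commands over SSH); the endpoint and file list come from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444" // from the log above
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is missing or the file doesn't exist.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
    			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
    				fmt.Fprintln(os.Stderr, "rm failed:", err)
    			}
    		}
    	}
    }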
	I1028 18:33:52.781890   67489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:52.961562   67489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:01.798408   67489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:01.798470   67489 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:01.798580   67489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:01.798724   67489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:01.798811   67489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:01.798882   67489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:01.800228   67489 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:01.800320   67489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:01.800392   67489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:01.800486   67489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:01.800580   67489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:01.800641   67489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:01.800694   67489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:01.800764   67489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:01.800842   67489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:01.800955   67489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:01.801019   67489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:01.801053   67489 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:01.801102   67489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:01.801145   67489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:01.801196   67489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:01.801252   67489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:01.801316   67489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:01.801409   67489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:01.801513   67489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:01.801605   67489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:01.802967   67489 out.go:235]   - Booting up control plane ...
	I1028 18:34:01.803061   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:01.803169   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:01.803254   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:01.803376   67489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:01.803488   67489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:01.803558   67489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:01.803685   67489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:01.803800   67489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:01.803869   67489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.148945ms
	I1028 18:34:01.803933   67489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:01.803986   67489 kubeadm.go:310] [api-check] The API server is healthy after 5.003798359s
	I1028 18:34:01.804081   67489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:01.804187   67489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:01.804240   67489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:01.804438   67489 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-692033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:01.804533   67489 kubeadm.go:310] [bootstrap-token] Using token: wy8zqj.38m6tcr6hp7sgzod
	I1028 18:34:01.805760   67489 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:01.805856   67489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:01.805949   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:01.806108   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:01.806233   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:01.806378   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:01.806464   67489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:01.806579   67489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:01.806633   67489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:01.806673   67489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:01.806679   67489 kubeadm.go:310] 
	I1028 18:34:01.806735   67489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:01.806746   67489 kubeadm.go:310] 
	I1028 18:34:01.806836   67489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:01.806844   67489 kubeadm.go:310] 
	I1028 18:34:01.806880   67489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:01.806957   67489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:01.807001   67489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:01.807007   67489 kubeadm.go:310] 
	I1028 18:34:01.807060   67489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:01.807071   67489 kubeadm.go:310] 
	I1028 18:34:01.807112   67489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:01.807118   67489 kubeadm.go:310] 
	I1028 18:34:01.807171   67489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:01.807246   67489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:01.807307   67489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:01.807313   67489 kubeadm.go:310] 
	I1028 18:34:01.807387   67489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:01.807454   67489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:01.807465   67489 kubeadm.go:310] 
	I1028 18:34:01.807538   67489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807634   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:01.807655   67489 kubeadm.go:310] 	--control-plane 
	I1028 18:34:01.807661   67489 kubeadm.go:310] 
	I1028 18:34:01.807730   67489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:01.807739   67489 kubeadm.go:310] 
	I1028 18:34:01.807810   67489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807913   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:01.807923   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:34:01.807929   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:01.809168   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:01.810293   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:01.822030   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
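Note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. The log does not show its contents; the sketch below writes a representative bridge conflist the same way, with field values that are assumptions rather than the exact bytes minikube ships:

    package main

    import "os"

    func main() {
    	// Illustrative bridge CNI config; minikube's real 1-k8s.conflist may differ in details.
    	conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }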
	I1028 18:34:01.842831   67489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:01.842908   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:01.842963   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-692033 minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=default-k8s-diff-port-692033 minikube.k8s.io/primary=true
	I1028 18:34:01.875265   67489 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:02.050422   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:02.550824   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.050477   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.551245   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.051177   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.550572   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.051071   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.550926   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.638447   67489 kubeadm.go:1113] duration metric: took 3.795598924s to wait for elevateKubeSystemPrivileges
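Note: the repeated "kubectl get sa default" runs above poll until the default service account exists in the default namespace, which is what the elevateKubeSystemPrivileges wait measures. A client-go version of the same wait, under the same placeholder-kubeconfig assumption as the earlier sketches; the 500ms interval and 2m timeout are illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Retry until the "default" ServiceAccount has been created by the token controller.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			return err == nil, nil
    		})
    	fmt.Println("default service account present:", err == nil)
    }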
	I1028 18:34:05.638483   67489 kubeadm.go:394] duration metric: took 4m59.162037455s to StartCluster
	I1028 18:34:05.638504   67489 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.638591   67489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:05.641196   67489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.641497   67489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:05.641626   67489 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:05.641720   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:05.641730   67489 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641748   67489 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641760   67489 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:05.641776   67489 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641781   67489 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641792   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.641794   67489 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641803   67489 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:05.641804   67489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-692033"
	I1028 18:34:05.641832   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.642210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642217   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642229   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642245   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642255   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642314   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642905   67489 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:05.644361   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:05.658478   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I1028 18:34:05.658586   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1028 18:34:05.659040   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659044   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659524   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659546   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659701   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659724   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659879   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660044   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660111   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.660610   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.660648   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.661748   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1028 18:34:05.662150   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.662607   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.662627   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.662983   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.662991   67489 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.663006   67489 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:05.663029   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.663294   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663334   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.663531   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663572   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.675955   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1028 18:34:05.676345   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.676784   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.676802   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.677154   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.677358   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.678723   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I1028 18:34:05.678897   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1028 18:34:05.679025   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.679243   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679337   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679700   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679715   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.679805   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679823   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.680500   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680506   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680706   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.680834   67489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:05.681042   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.681070   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.681982   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:05.682005   67489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:05.682035   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.682363   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.683806   67489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:05.684992   67489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.685011   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:05.685029   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.686903   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.686957   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.686973   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.687218   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.687429   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.687693   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.687850   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.688516   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.688908   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.688933   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.689193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.689372   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.689513   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.689655   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.696743   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I1028 18:34:05.697029   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.697432   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.697458   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.697697   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.697843   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.699192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.699397   67489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.699405   67489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:05.699416   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.702897   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.703368   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703483   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.703667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.703841   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.703996   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.838049   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:05.857829   67489 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866141   67489 node_ready.go:49] node "default-k8s-diff-port-692033" has status "Ready":"True"
	I1028 18:34:05.866158   67489 node_ready.go:38] duration metric: took 8.296617ms for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866167   67489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:05.873027   67489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:05.927585   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:05.927608   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:05.928743   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.946390   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.961712   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:05.961734   67489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:05.993688   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:05.993711   67489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:06.097871   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:06.696189   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696226   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696195   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696300   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696696   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696713   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696697   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696721   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696735   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696742   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696750   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696722   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696794   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696984   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697000   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.697027   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697036   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.720324   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.720346   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.720649   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.720668   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262166   67489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.164245646s)
	I1028 18:34:07.262256   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262277   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262587   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262608   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262607   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262616   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262625   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262890   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262923   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262936   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262948   67489 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-692033"
	I1028 18:34:07.264414   67489 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:07.265449   67489 addons.go:510] duration metric: took 1.623834435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
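Note: enabling the metrics-server addon boils down to applying its four manifests with the bundled kubectl, exactly as the ssh_runner apply line above shows. A simplified local sketch of that invocation via os/exec, with the kubectl path, kubeconfig, and manifest paths taken from the log and error handling reduced to a print:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	// Mirror the logged command: sudo KUBECONFIG=... kubectl apply -f ... -f ...
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", "/var/lib/minikube/binaries/v1.31.2/kubectl", "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }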
	I1028 18:34:07.882264   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.313629   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.206119005s)
	I1028 18:34:14.313702   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:14.329212   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:34:14.339407   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:14.349645   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:14.349669   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:14.349716   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:14.359332   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:14.359384   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:14.369627   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:14.381040   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:14.381098   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:14.390359   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.399743   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:14.399783   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.408932   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:14.417840   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:14.417876   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:14.427234   66600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:14.472502   66600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:14.472593   66600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:14.578311   66600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:14.578456   66600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:14.578576   66600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:14.586748   66600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:10.380304   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:12.878632   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.878951   67489 pod_ready.go:93] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:14.878974   67489 pod_ready.go:82] duration metric: took 9.005915421s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:14.878983   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385215   67489 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.385239   67489 pod_ready.go:82] duration metric: took 506.249352ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385250   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390412   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.390435   67489 pod_ready.go:82] duration metric: took 5.177559ms for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390448   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395252   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.395272   67489 pod_ready.go:82] duration metric: took 4.816812ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395281   67489 pod_ready.go:39] duration metric: took 9.52910413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:15.395298   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:15.395349   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:15.413693   67489 api_server.go:72] duration metric: took 9.772160727s to wait for apiserver process to appear ...
	I1028 18:34:15.413715   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:15.413734   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:34:15.417780   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:34:15.418688   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:15.418712   67489 api_server.go:131] duration metric: took 4.989226ms to wait for apiserver health ...
	I1028 18:34:15.418720   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:15.424285   67489 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:15.424306   67489 system_pods.go:61] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.424310   67489 system_pods.go:61] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.424315   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.424318   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.424323   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.424327   67489 system_pods.go:61] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.424331   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.424337   67489 system_pods.go:61] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.424344   67489 system_pods.go:61] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.424351   67489 system_pods.go:74] duration metric: took 5.625205ms to wait for pod list to return data ...
	I1028 18:34:15.424359   67489 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:15.427132   67489 default_sa.go:45] found service account: "default"
	I1028 18:34:15.427153   67489 default_sa.go:55] duration metric: took 2.788005ms for default service account to be created ...
	I1028 18:34:15.427161   67489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:15.479404   67489 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:15.479427   67489 system_pods.go:89] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.479433   67489 system_pods.go:89] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.479436   67489 system_pods.go:89] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.479443   67489 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.479448   67489 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.479453   67489 system_pods.go:89] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.479460   67489 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.479472   67489 system_pods.go:89] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.479477   67489 system_pods.go:89] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.479491   67489 system_pods.go:126] duration metric: took 52.324012ms to wait for k8s-apps to be running ...
	I1028 18:34:15.479502   67489 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:15.479548   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:15.493743   67489 system_svc.go:56] duration metric: took 14.233947ms WaitForService to wait for kubelet
	I1028 18:34:15.493772   67489 kubeadm.go:582] duration metric: took 9.852243286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:15.493796   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:15.677127   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:15.677149   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:15.677156   67489 node_conditions.go:105] duration metric: took 183.355591ms to run NodePressure ...
	I1028 18:34:15.677167   67489 start.go:241] waiting for startup goroutines ...
	I1028 18:34:15.677174   67489 start.go:246] waiting for cluster config update ...
	I1028 18:34:15.677183   67489 start.go:255] writing updated cluster config ...
	I1028 18:34:15.677419   67489 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:15.731157   67489 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:15.732912   67489 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-692033" cluster and "default" namespace by default
	I1028 18:34:14.588528   66600 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:14.588660   66600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:14.588749   66600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:14.588886   66600 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:14.588985   66600 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:14.589089   66600 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:14.589179   66600 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:14.589268   66600 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:14.589362   66600 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:14.589472   66600 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:14.589575   66600 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:14.589638   66600 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:14.589739   66600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:14.902456   66600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:15.107236   66600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:15.198073   66600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:15.618175   66600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:15.804761   66600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:15.805675   66600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:15.809860   66600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:15.811538   66600 out.go:235]   - Booting up control plane ...
	I1028 18:34:15.811658   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:15.811761   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:15.812969   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:15.838182   66600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:15.846044   66600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:15.846126   66600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:15.981748   66600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:15.981899   66600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:16.483112   66600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262752ms
	I1028 18:34:16.483242   66600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:21.484655   66600 kubeadm.go:310] [api-check] The API server is healthy after 5.001327308s
	I1028 18:34:21.498067   66600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:21.508713   66600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:21.537520   66600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:21.537724   66600 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-021370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:21.551416   66600 kubeadm.go:310] [bootstrap-token] Using token: c2otm2.eh2uwearn2r38epe
	I1028 18:34:21.552613   66600 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:21.552721   66600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:21.556871   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:21.563570   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:21.566336   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:21.569226   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:21.575090   66600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:21.890874   66600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:22.315363   66600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:22.892050   66600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:22.892097   66600 kubeadm.go:310] 
	I1028 18:34:22.892198   66600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:22.892214   66600 kubeadm.go:310] 
	I1028 18:34:22.892297   66600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:22.892308   66600 kubeadm.go:310] 
	I1028 18:34:22.892346   66600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:22.892457   66600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:22.892549   66600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:22.892559   66600 kubeadm.go:310] 
	I1028 18:34:22.892628   66600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:22.892643   66600 kubeadm.go:310] 
	I1028 18:34:22.892705   66600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:22.892715   66600 kubeadm.go:310] 
	I1028 18:34:22.892784   66600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:22.892851   66600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:22.892958   66600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:22.892981   66600 kubeadm.go:310] 
	I1028 18:34:22.893093   66600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:22.893197   66600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:22.893212   66600 kubeadm.go:310] 
	I1028 18:34:22.893320   66600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893460   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:22.893506   66600 kubeadm.go:310] 	--control-plane 
	I1028 18:34:22.893515   66600 kubeadm.go:310] 
	I1028 18:34:22.893622   66600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:22.893631   66600 kubeadm.go:310] 
	I1028 18:34:22.893728   66600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893886   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:22.894813   66600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:22.895022   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:34:22.895037   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:22.897376   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:22.898532   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:22.909363   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:34:22.930151   66600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:22.930190   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:22.930280   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-021370 minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=embed-certs-021370 minikube.k8s.io/primary=true
	I1028 18:34:22.963249   66600 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:23.216574   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:23.717592   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.217674   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.717602   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.216832   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.717673   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.217668   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.716727   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.217476   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.343171   66600 kubeadm.go:1113] duration metric: took 4.413029537s to wait for elevateKubeSystemPrivileges
	I1028 18:34:27.343201   66600 kubeadm.go:394] duration metric: took 5m1.603783417s to StartCluster
	I1028 18:34:27.343221   66600 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.343302   66600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:27.344913   66600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.345149   66600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:27.345210   66600 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:27.345282   66600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-021370"
	I1028 18:34:27.345297   66600 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-021370"
	W1028 18:34:27.345304   66600 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:27.345310   66600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-021370"
	I1028 18:34:27.345339   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345337   66600 addons.go:69] Setting metrics-server=true in profile "embed-certs-021370"
	I1028 18:34:27.345353   66600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-021370"
	I1028 18:34:27.345360   66600 addons.go:234] Setting addon metrics-server=true in "embed-certs-021370"
	W1028 18:34:27.345369   66600 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:27.345381   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:27.345396   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345742   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345788   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345794   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345798   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.346770   66600 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:27.348169   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:27.361310   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1028 18:34:27.361763   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362073   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 18:34:27.362257   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.362292   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.362550   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362640   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363049   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.363079   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.363204   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.363242   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.363425   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363610   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.363934   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1028 18:34:27.364390   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.364865   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.364885   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.365229   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.365805   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.365852   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.367292   66600 addons.go:234] Setting addon default-storageclass=true in "embed-certs-021370"
	W1028 18:34:27.367314   66600 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:27.367347   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.367738   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.367782   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.381375   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1028 18:34:27.381846   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.382429   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.382441   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.382787   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.382926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.382965   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I1028 18:34:27.383568   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.384121   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.384134   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.384530   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.384730   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.384815   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386107   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I1028 18:34:27.386306   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386435   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.386888   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.386911   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.386977   66600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:27.387284   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.387866   66600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:27.387883   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.388259   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.388628   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:27.388645   66600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:27.388658   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.390614   66600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.390634   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:27.390650   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.393252   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393734   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.393758   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.394122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.394238   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.394364   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.394640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395084   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.395110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.395383   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.395540   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.395677   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.406551   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1028 18:34:27.406907   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.407358   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.407376   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.407699   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.407891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.409287   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.409489   66600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.409502   66600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:27.409517   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.412275   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412828   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.412858   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412984   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.413162   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.413303   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.413453   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.546891   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:27.571837   66600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595105   66600 node_ready.go:49] node "embed-certs-021370" has status "Ready":"True"
	I1028 18:34:27.595127   66600 node_ready.go:38] duration metric: took 23.255834ms for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595156   66600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:27.603107   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:27.635422   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.657051   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.666085   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:27.666110   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:27.706366   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:27.706394   66600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:27.772162   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:27.772191   66600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:27.844116   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:28.411454   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411478   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411522   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411544   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411751   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.411960   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.411982   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.411991   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411998   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.412223   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.412266   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413310   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413326   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413338   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.413344   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.413569   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413584   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.420867   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.420891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.421092   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.421168   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.421169   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957337   66600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.11317187s)
	I1028 18:34:28.957385   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957395   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957696   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957715   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957725   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957733   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957957   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957970   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957988   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957990   66600 addons.go:475] Verifying addon metrics-server=true in "embed-certs-021370"
	I1028 18:34:28.959590   66600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:28.961127   66600 addons.go:510] duration metric: took 1.615922156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:29.611126   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:32.110577   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:34.610544   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:37.111319   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.111342   66600 pod_ready.go:82] duration metric: took 9.508204126s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.111351   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119547   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.119571   66600 pod_ready.go:82] duration metric: took 8.212577ms for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119581   66600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126030   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.126048   66600 pod_ready.go:82] duration metric: took 6.46043ms for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126056   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132366   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.132386   66600 pod_ready.go:82] duration metric: took 6.323715ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132394   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137151   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.137171   66600 pod_ready.go:82] duration metric: took 4.770272ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137182   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507159   66600 pod_ready.go:93] pod "kube-proxy-nrr6g" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.507180   66600 pod_ready.go:82] duration metric: took 369.991591ms for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507189   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908006   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.908030   66600 pod_ready.go:82] duration metric: took 400.834669ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908038   66600 pod_ready.go:39] duration metric: took 10.312872321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:37.908052   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:37.908098   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:37.924515   66600 api_server.go:72] duration metric: took 10.579335154s to wait for apiserver process to appear ...
	I1028 18:34:37.924552   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:37.924572   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:34:37.929438   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:34:37.930716   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:37.930742   66600 api_server.go:131] duration metric: took 6.181503ms to wait for apiserver health ...
	I1028 18:34:37.930752   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:38.113401   66600 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:38.113430   66600 system_pods.go:61] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.113435   66600 system_pods.go:61] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.113439   66600 system_pods.go:61] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.113442   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.113446   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.113449   66600 system_pods.go:61] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.113452   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.113457   66600 system_pods.go:61] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.113462   66600 system_pods.go:61] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.113468   66600 system_pods.go:74] duration metric: took 182.711396ms to wait for pod list to return data ...
	I1028 18:34:38.113475   66600 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:38.309139   66600 default_sa.go:45] found service account: "default"
	I1028 18:34:38.309170   66600 default_sa.go:55] duration metric: took 195.688587ms for default service account to be created ...
	I1028 18:34:38.309182   66600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:38.510307   66600 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:38.510336   66600 system_pods.go:89] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.510341   66600 system_pods.go:89] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.510345   66600 system_pods.go:89] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.510349   66600 system_pods.go:89] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.510352   66600 system_pods.go:89] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.510355   66600 system_pods.go:89] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.510360   66600 system_pods.go:89] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.510368   66600 system_pods.go:89] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.510376   66600 system_pods.go:89] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.510391   66600 system_pods.go:126] duration metric: took 201.199416ms to wait for k8s-apps to be running ...
	I1028 18:34:38.510403   66600 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:38.510448   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:38.526043   66600 system_svc.go:56] duration metric: took 15.628796ms WaitForService to wait for kubelet
	I1028 18:34:38.526075   66600 kubeadm.go:582] duration metric: took 11.18089878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:38.526109   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:38.707568   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:38.707594   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:38.707604   66600 node_conditions.go:105] duration metric: took 181.491056ms to run NodePressure ...
	I1028 18:34:38.707615   66600 start.go:241] waiting for startup goroutines ...
	I1028 18:34:38.707621   66600 start.go:246] waiting for cluster config update ...
	I1028 18:34:38.707631   66600 start.go:255] writing updated cluster config ...
	I1028 18:34:38.707950   66600 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:38.755355   66600 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:38.757256   66600 out.go:177] * Done! kubectl is now configured to use "embed-certs-021370" cluster and "default" namespace by default
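	[editor's note, not part of the recorded run] The embed-certs-021370 run above ends with "metrics-server-6867b74b74-hpwrm" still Pending / ContainersNotReady while every other kube-system pod is Running. As an illustrative follow-up only (a sketch; the kubectl context name is assumed to equal the minikube profile name, and the pod name is taken from the log lines above), one could inspect that pod from the host after the run:
	
	  # list kube-system pods to confirm which one is stuck Pending
	  kubectl --context embed-certs-021370 -n kube-system get pods
	
	  # show container statuses, image pull attempts and scheduling events for the stuck pod
	  kubectl --context embed-certs-021370 -n kube-system describe pod metrics-server-6867b74b74-hpwrm
	
	  # recent namespace events, oldest first, to see repeated pull/backoff messages
	  kubectl --context embed-certs-021370 -n kube-system get events --sort-by=.lastTimestamp
	
	This does not change the recorded result; it only shows how the Pending state reported in the log could be examined further.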
	I1028 18:34:49.381931   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:34:49.382111   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:34:49.383570   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:34:49.383633   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:49.383732   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:49.383859   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:49.383975   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:34:49.384073   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:49.385654   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:49.385757   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:49.385847   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:49.385937   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:49.386008   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:49.386118   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:49.386214   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:49.386316   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:49.386391   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:49.386478   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:49.386597   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:49.386643   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:49.386724   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:49.386813   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:49.386891   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:49.386983   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:49.387070   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:49.387209   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:49.387330   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:49.387389   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:49.387474   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:49.389653   67149 out.go:235]   - Booting up control plane ...
	I1028 18:34:49.389760   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:49.389867   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:49.389971   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:49.390088   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:49.390228   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:34:49.390277   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:34:49.390355   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390550   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390645   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390832   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390903   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391069   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391163   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391354   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391452   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391649   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391657   67149 kubeadm.go:310] 
	I1028 18:34:49.391691   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:34:49.391743   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:34:49.391758   67149 kubeadm.go:310] 
	I1028 18:34:49.391789   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:34:49.391822   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:34:49.391908   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:34:49.391914   67149 kubeadm.go:310] 
	I1028 18:34:49.392024   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:34:49.392073   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:34:49.392133   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:34:49.392142   67149 kubeadm.go:310] 
	I1028 18:34:49.392267   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:34:49.392363   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:34:49.392380   67149 kubeadm.go:310] 
	I1028 18:34:49.392525   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:34:49.392629   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:34:49.392737   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:34:49.392830   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:34:49.392879   67149 kubeadm.go:310] 
	W1028 18:34:49.392949   67149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:34:49.392991   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:34:49.869859   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:49.884524   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:49.896293   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:49.896318   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:49.896354   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:49.907312   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:49.907364   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:49.917926   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:49.928001   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:49.928048   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:49.938687   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.949217   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:49.949268   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.959955   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:49.970105   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:49.970156   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:49.980760   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:50.212973   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:36:46.686631   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:36:46.686753   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:36:46.688224   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:36:46.688325   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:36:46.688449   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:36:46.688587   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:36:46.688726   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:36:46.688813   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:36:46.690320   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:36:46.690427   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:36:46.690524   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:36:46.690627   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:36:46.690720   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:36:46.690824   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:36:46.690897   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:36:46.690984   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:36:46.691064   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:36:46.691161   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:36:46.691253   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:36:46.691309   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:36:46.691379   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:36:46.691426   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:36:46.691471   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:36:46.691547   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:36:46.691619   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:36:46.691713   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:36:46.691814   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:36:46.691864   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:36:46.691951   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:36:46.693258   67149 out.go:235]   - Booting up control plane ...
	I1028 18:36:46.693374   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:36:46.693471   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:36:46.693566   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:36:46.693682   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:36:46.693870   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:36:46.693930   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:36:46.694023   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694253   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694343   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694527   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694614   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694798   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694894   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695053   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695119   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695315   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695324   67149 kubeadm.go:310] 
	I1028 18:36:46.695357   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:36:46.695392   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:36:46.695398   67149 kubeadm.go:310] 
	I1028 18:36:46.695427   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:36:46.695456   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:36:46.695542   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:36:46.695549   67149 kubeadm.go:310] 
	I1028 18:36:46.695665   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:36:46.695717   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:36:46.695767   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:36:46.695781   67149 kubeadm.go:310] 
	I1028 18:36:46.695921   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:36:46.696037   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:36:46.696048   67149 kubeadm.go:310] 
	I1028 18:36:46.696177   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:36:46.696285   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:36:46.696390   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:36:46.696512   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:36:46.696560   67149 kubeadm.go:310] 
	I1028 18:36:46.696579   67149 kubeadm.go:394] duration metric: took 7m56.601380499s to StartCluster
	I1028 18:36:46.696618   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:36:46.696670   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:36:46.738714   67149 cri.go:89] found id: ""
	I1028 18:36:46.738741   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.738749   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:36:46.738757   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:36:46.738822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:36:46.772906   67149 cri.go:89] found id: ""
	I1028 18:36:46.772934   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.772944   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:36:46.772951   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:36:46.773028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:36:46.808785   67149 cri.go:89] found id: ""
	I1028 18:36:46.808809   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.808819   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:36:46.808827   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:36:46.808884   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:36:46.842977   67149 cri.go:89] found id: ""
	I1028 18:36:46.843007   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.843016   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:36:46.843022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:36:46.843095   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:36:46.878121   67149 cri.go:89] found id: ""
	I1028 18:36:46.878148   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.878159   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:36:46.878166   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:36:46.878231   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:36:46.911953   67149 cri.go:89] found id: ""
	I1028 18:36:46.911977   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.911984   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:36:46.911990   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:36:46.912054   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:36:46.944291   67149 cri.go:89] found id: ""
	I1028 18:36:46.944317   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.944324   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:36:46.944329   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:36:46.944379   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:36:46.976525   67149 cri.go:89] found id: ""
	I1028 18:36:46.976554   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.976564   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:36:46.976575   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:36:46.976588   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:36:47.026517   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:36:47.026544   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:36:47.041198   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:36:47.041231   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:36:47.115650   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:36:47.115681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:36:47.115695   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:36:47.218059   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:36:47.218093   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 18:36:47.257114   67149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:36:47.257182   67149 out.go:270] * 
	W1028 18:36:47.257240   67149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.257280   67149 out.go:270] * 
	W1028 18:36:47.258088   67149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
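For completeness, the log bundle requested in the box above can also be collected for a specific profile; the profile name below is a placeholder, not taken from this run:

	# Collect the full minikube log bundle referenced above; -p selects the profile.
	PROFILE=my-profile            # placeholder; substitute the failing profile name
	minikube logs -p "$PROFILE" --file=logs.txt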
	I1028 18:36:47.261521   67149 out.go:201] 
	W1028 18:36:47.262707   67149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.262742   67149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:36:47.262760   67149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:36:47.264073   67149 out.go:201] 
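The Suggestion line above points at the kubelet cgroup driver. A minimal sketch of retrying the start with that setting, assuming the kvm2 driver and CRI-O runtime this job uses; the profile name is a placeholder and is not taken from this run:

	# Placeholder profile name; the flags combine this job's driver/runtime with
	# the --extra-config value from the Suggestion line above.
	PROFILE=my-profile            # placeholder; substitute the failing profile name
	minikube delete -p "$PROFILE"
	minikube start -p "$PROFILE" --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still does not come up, 'journalctl -xeu kubelet' on the node (as the kubeadm output above already suggests) is the next place to look.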
	
	
	==> CRI-O <==
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.620588474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140997620568321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da94c113-7b43-495a-8c01-0163c4f6a9c0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.621226765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e4fb569-d90f-4410-a72e-d2651ca62b05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.621275193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e4fb569-d90f-4410-a72e-d2651ca62b05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.621513843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e4fb569-d90f-4410-a72e-d2651ca62b05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.656547313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48833224-3d10-43d9-91a8-19645094daa9 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.656655862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48833224-3d10-43d9-91a8-19645094daa9 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.657769965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99d9b01e-6dbf-4d19-bc69-1066f18e930b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.658494044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140997658463537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99d9b01e-6dbf-4d19-bc69-1066f18e930b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.659552548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42b0a90f-7c6f-4a87-bc3b-75288f83ce05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.659620681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42b0a90f-7c6f-4a87-bc3b-75288f83ce05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.659886598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42b0a90f-7c6f-4a87-bc3b-75288f83ce05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.697262495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f69092f6-2355-48d9-987c-112efc239271 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.697368291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f69092f6-2355-48d9-987c-112efc239271 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.698615192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb77aa75-fd65-47f9-8447-e8ce376a422c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.699120464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140997699097604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb77aa75-fd65-47f9-8447-e8ce376a422c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.699655173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dcf9a84-ad51-483a-a9d2-e07865d171be name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.699720803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dcf9a84-ad51-483a-a9d2-e07865d171be name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.699987385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dcf9a84-ad51-483a-a9d2-e07865d171be name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.733242418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35cfac7c-b64e-4dc9-82a6-3e51e174c294 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.733363347Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35cfac7c-b64e-4dc9-82a6-3e51e174c294 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.734725289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25336ac0-2e18-4cca-930f-3aaa431a84fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.735317962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140997735294306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25336ac0-2e18-4cca-930f-3aaa431a84fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.736213287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7eb8d7b6-f57d-41e5-ad34-86abe7de4ffb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.736287231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7eb8d7b6-f57d-41e5-ad34-86abe7de4ffb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:17 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:43:17.736485068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7eb8d7b6-f57d-41e5-ad34-86abe7de4ffb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60c0aac9932e8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   42c2a34c0cb4c       coredns-7c65d6cfc9-rhvmm
	405dd9d867300       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   9bda050b81c88       storage-provisioner
	569ca30401d69       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   f7365a572ebb0       coredns-7c65d6cfc9-25sf7
	150d2b4f66144       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   bf45e668a8b5d       kube-proxy-b56jx
	044fcc47181f7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   dcb9b277b485f       kube-scheduler-default-k8s-diff-port-692033
	7fc4b09c10022       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   b3d5706a965ff       etcd-default-k8s-diff-port-692033
	78fb2afab1c5b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   7a50831184579       kube-apiserver-default-k8s-diff-port-692033
	f82fa01be0383       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   cb0bb21858003       kube-controller-manager-default-k8s-diff-port-692033
	7e512fff51a6b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   42c267a91b0ae       kube-apiserver-default-k8s-diff-port-692033
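The table above is the CRI runtime's view of the node. A minimal way to regenerate it from inside the guest, assuming the profile name matches the node name shown in these logs (default-k8s-diff-port-692033), is:

	minikube ssh -p default-k8s-diff-port-692033 -- sudo crictl ps -a

The -a flag includes exited containers, which is why the earlier kube-apiserver attempt (7e512fff51a6b, state Exited, attempt 1) remains visible alongside the running attempt 2.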
	
	
	==> coredns [569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-692033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-692033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=default-k8s-diff-port-692033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:33:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-692033
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:43:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:39:17 +0000   Mon, 28 Oct 2024 18:33:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:39:17 +0000   Mon, 28 Oct 2024 18:33:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:39:17 +0000   Mon, 28 Oct 2024 18:33:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:39:17 +0000   Mon, 28 Oct 2024 18:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    default-k8s-diff-port-692033
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df22995e8bda4630892d9a7d579ec690
	  System UUID:                df22995e-8bda-4630-892d-9a7d579ec690
	  Boot ID:                    d9a76dc0-ef12-43e1-8b0b-0c10f8a07301
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-25sf7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-rhvmm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-692033                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-692033             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-692033    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-b56jx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-692033             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-8vz62                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node default-k8s-diff-port-692033 event: Registered Node default-k8s-diff-port-692033 in Controller
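The pod table above lists metrics-server-6867b74b74-8vz62 among the node's workloads; the 503 errors in the kube-apiserver log further down suggest it never becomes ready. A quick way to inspect it, assuming the kubectl context carries the same name as the minikube profile, is:

	kubectl --context default-k8s-diff-port-692033 -n kube-system describe pod metrics-server-6867b74b74-8vz62
	kubectl --context default-k8s-diff-port-692033 -n kube-system logs metrics-server-6867b74b74-8vz62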
	
	
	==> dmesg <==
	[  +0.055872] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.269265] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.563578] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.379279] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 18:29] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.055965] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055826] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.194006] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.130454] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.305093] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.232771] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +2.275710] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.059753] kauditd_printk_skb: 158 callbacks suppressed
	[  +4.998100] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.328196] kauditd_printk_skb: 54 callbacks suppressed
	[Oct28 18:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.332543] systemd-fstab-generator[2608]: Ignoring "noauto" option for root device
	[  +4.560979] kauditd_printk_skb: 56 callbacks suppressed
	[Oct28 18:34] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +4.879427] systemd-fstab-generator[3041]: Ignoring "noauto" option for root device
	[  +0.096804] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.297258] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb] <==
	{"level":"info","ts":"2024-10-28T18:33:56.331165Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T18:33:56.333403Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ce9e8f286885b37e","initial-advertise-peer-urls":["https://192.168.39.215:2380"],"listen-peer-urls":["https://192.168.39.215:2380"],"advertise-client-urls":["https://192.168.39.215:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.215:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T18:33:56.333219Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-10-28T18:33:56.334052Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-10-28T18:33:56.334094Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T18:33:56.991324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T18:33:56.991388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T18:33:56.991413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgPreVoteResp from ce9e8f286885b37e at term 1"}
	{"level":"info","ts":"2024-10-28T18:33:56.991425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:56.991431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgVoteResp from ce9e8f286885b37e at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:56.991440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became leader at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:56.991457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce9e8f286885b37e elected leader ce9e8f286885b37e at term 2"}
	{"level":"info","ts":"2024-10-28T18:33:56.992712Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ce9e8f286885b37e","local-member-attributes":"{Name:default-k8s-diff-port-692033 ClientURLs:[https://192.168.39.215:2379]}","request-path":"/0/members/ce9e8f286885b37e/attributes","cluster-id":"4cd5d1376c5e8c88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T18:33:56.992755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:33:56.992792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:33:56.993046Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:56.993865Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:33:56.994656Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T18:33:56.994825Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:33:56.994853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T18:33:56.996382Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:33:56.998634Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.215:2379"}
	{"level":"info","ts":"2024-10-28T18:33:57.002036Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4cd5d1376c5e8c88","local-member-id":"ce9e8f286885b37e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:57.002172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:57.002982Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:43:18 up 14 min,  0 users,  load average: 0.39, 0.30, 0.17
	Linux default-k8s-diff-port-692033 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862] <==
	W1028 18:38:59.456859       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:38:59.457262       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:38:59.458443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:38:59.458522       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:39:59.459193       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:39:59.459236       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:39:59.459450       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 18:39:59.459521       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:39:59.460814       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:39:59.460867       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:41:59.461400       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:41:59.461526       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 18:41:59.461771       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:41:59.461957       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:41:59.462688       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:41:59.463786       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
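The repeated 503 responses above show the aggregated v1beta1.metrics.k8s.io API never becomes available to the apiserver. Its registration status can be checked directly (same context-name assumption as above):

	kubectl --context default-k8s-diff-port-692033 get apiservice v1beta1.metrics.k8s.io -o yaml

The status.conditions block of that APIService records why the aggregator considers the backing service unavailable, typically a FailedDiscoveryCheck while the metrics-server endpoints are not ready.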
	
	
	==> kube-apiserver [7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df] <==
	W1028 18:33:49.120170       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.120177       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.158146       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.171791       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.177249       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.204467       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.208739       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.233468       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.260466       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.337849       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.347689       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.347785       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.356351       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.370185       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.409261       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.422978       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.601743       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.627147       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.627545       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.658781       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.768606       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.838876       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.962703       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:50.146043       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:50.179239       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6] <==
	E1028 18:38:05.437813       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:38:05.922101       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:38:35.443885       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:38:35.929738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:39:05.450076       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:39:05.937735       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:39:17.594578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-692033"
	E1028 18:39:35.456722       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:39:35.948004       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:40:04.115659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="242.07µs"
	E1028 18:40:05.465690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:40:05.954711       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:40:19.111741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="114.797µs"
	E1028 18:40:35.472332       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:40:35.962869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:41:05.478294       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:41:05.971571       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:41:35.485572       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:41:35.982190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:42:05.493473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:42:05.989647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:42:35.502052       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:42:35.997100       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:43:05.508158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:43:06.005346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:34:07.356840       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:34:07.374545       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E1028 18:34:07.374637       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:34:07.620280       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:34:07.620337       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:34:07.620369       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:34:07.655596       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:34:07.655811       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:34:07.655823       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:34:07.659393       1 config.go:199] "Starting service config controller"
	I1028 18:34:07.659406       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:34:07.659420       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:34:07.659423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:34:07.659809       1 config.go:328] "Starting node config controller"
	I1028 18:34:07.659841       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:34:07.760606       1 shared_informer.go:320] Caches are synced for node config
	I1028 18:34:07.760652       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:34:07.760662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b] <==
	W1028 18:33:58.479993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:33:58.480006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 18:33:58.480076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 18:33:58.480144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:33:58.480210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.479508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:58.480484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:33:58.481048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 18:33:58.481243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.335172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 18:33:59.335224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.358647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 18:33:59.358702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.390271       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 18:33:59.390403       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 18:33:59.615126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:33:59.615160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.627660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:59.627787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 18:34:01.265099       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 18:42:01 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:01.303994    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140921303320117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:11 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:11.306157    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140931305590485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:11 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:11.306538    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140931305590485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:14 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:14.096842    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:42:21 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:21.308696    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140941308276058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:21 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:21.308791    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140941308276058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:26 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:26.097802    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:42:31 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:31.310502    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140951309845530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:31 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:31.310866    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140951309845530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:38 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:38.097527    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:42:41 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:41.312414    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140961311999680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:41 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:41.312457    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140961311999680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:51 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:51.313737    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140971313436362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:51 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:51.313778    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140971313436362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:52 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:42:52.097335    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:43:01 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:43:01.125324    2937 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 18:43:01 default-k8s-diff-port-692033 kubelet[2937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 18:43:01 default-k8s-diff-port-692033 kubelet[2937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 18:43:01 default-k8s-diff-port-692033 kubelet[2937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 18:43:01 default-k8s-diff-port-692033 kubelet[2937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 18:43:01 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:43:01.315257    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140981314863539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:01 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:43:01.315413    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140981314863539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:05 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:43:05.099097    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:43:11 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:43:11.322668    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140991321539176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:11 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:43:11.322784    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140991321539176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7] <==
	I1028 18:34:07.700230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 18:34:07.715488       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 18:34:07.715714       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 18:34:07.723879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 18:34:07.724170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-692033_25241bb0-fdda-4304-ae05-b56a6882e94d!
	I1028 18:34:07.724271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3cc5af26-302e-492f-881a-248b50a59ab1", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-692033_25241bb0-fdda-4304-ae05-b56a6882e94d became leader
	I1028 18:34:07.825397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-692033_25241bb0-fdda-4304-ae05-b56a6882e94d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8vz62
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 describe pod metrics-server-6867b74b74-8vz62
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-692033 describe pod metrics-server-6867b74b74-8vz62: exit status 1 (61.067131ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8vz62" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-692033 describe pod metrics-server-6867b74b74-8vz62: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1028 18:35:01.467847   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:35:33.435882   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021370 -n embed-certs-021370
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 18:43:39.263428896 +0000 UTC m=+5858.721473210
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-021370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-021370 logs -n 25: (1.896651007s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-703793                              | running-upgrade-703793       | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-021370            | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:25:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:25:35.146308   67489 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:25:35.146467   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146474   67489 out.go:358] Setting ErrFile to fd 2...
	I1028 18:25:35.146480   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146973   67489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:25:35.147825   67489 out.go:352] Setting JSON to false
	I1028 18:25:35.148718   67489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7678,"bootTime":1730132257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:25:35.148810   67489 start.go:139] virtualization: kvm guest
	I1028 18:25:35.150695   67489 out.go:177] * [default-k8s-diff-port-692033] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:25:35.151797   67489 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:25:35.151797   67489 notify.go:220] Checking for updates...
	I1028 18:25:35.154193   67489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:25:35.155491   67489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:25:35.156576   67489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:25:35.157619   67489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:25:35.158702   67489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:25:35.160202   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:25:35.160602   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.160658   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.175095   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I1028 18:25:35.175421   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.175848   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.175863   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.176187   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.176387   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.176667   67489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:25:35.177210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.177325   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.191270   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I1028 18:25:35.191687   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.192092   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.192114   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.192388   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.192551   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.222738   67489 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:25:35.223900   67489 start.go:297] selected driver: kvm2
	I1028 18:25:35.223910   67489 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.224018   67489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:25:35.224696   67489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.224770   67489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:25:35.238839   67489 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:25:35.239228   67489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:25:35.239258   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:25:35.239310   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:25:35.239360   67489 start.go:340] cluster config:
	{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.239480   67489 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.241175   67489 out.go:177] * Starting "default-k8s-diff-port-692033" primary control-plane node in "default-k8s-diff-port-692033" cluster
	I1028 18:25:37.248702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:35.242393   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:25:35.242423   67489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:25:35.242432   67489 cache.go:56] Caching tarball of preloaded images
	I1028 18:25:35.242504   67489 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:25:35.242517   67489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:25:35.242600   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:25:35.242763   67489 start.go:360] acquireMachinesLock for default-k8s-diff-port-692033: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:25:40.320712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:46.400713   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:49.472709   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:55.552712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:58.624703   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:04.704707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:07.776740   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:13.856735   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:16.928744   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:23.008721   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:26.080668   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:32.160706   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:35.232663   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:41.312774   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:44.384739   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:50.464729   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:53.536702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:59.616750   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:02.688719   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:08.768731   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:11.840771   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:17.920756   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:20.992753   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:27.072785   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:30.144726   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:36.224704   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:39.296825   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:45.376692   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:48.448699   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:54.528707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:57.600754   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:28:00.605468   66801 start.go:364] duration metric: took 4m12.368996576s to acquireMachinesLock for "no-preload-051152"
	I1028 18:28:00.605517   66801 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:00.605525   66801 fix.go:54] fixHost starting: 
	I1028 18:28:00.605815   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:00.605850   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:00.621828   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1028 18:28:00.622237   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:00.622654   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:28:00.622674   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:00.622975   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:00.623150   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:00.623272   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:28:00.624880   66801 fix.go:112] recreateIfNeeded on no-preload-051152: state=Stopped err=<nil>
	I1028 18:28:00.624910   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	W1028 18:28:00.625076   66801 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:00.627065   66801 out.go:177] * Restarting existing kvm2 VM for "no-preload-051152" ...
	I1028 18:28:00.603089   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:00.603122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603425   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:28:00.603450   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603663   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:28:00.605343   66600 machine.go:96] duration metric: took 4m37.432159141s to provisionDockerMachine
	I1028 18:28:00.605380   66600 fix.go:56] duration metric: took 4m37.452432846s for fixHost
	I1028 18:28:00.605387   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 4m37.452449736s
	W1028 18:28:00.605419   66600 start.go:714] error starting host: provision: host is not running
	W1028 18:28:00.605517   66600 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 18:28:00.605528   66600 start.go:729] Will try again in 5 seconds ...
	I1028 18:28:00.628172   66801 main.go:141] libmachine: (no-preload-051152) Calling .Start
	I1028 18:28:00.628308   66801 main.go:141] libmachine: (no-preload-051152) Ensuring networks are active...
	I1028 18:28:00.629123   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network default is active
	I1028 18:28:00.629467   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network mk-no-preload-051152 is active
	I1028 18:28:00.629782   66801 main.go:141] libmachine: (no-preload-051152) Getting domain xml...
	I1028 18:28:00.630687   66801 main.go:141] libmachine: (no-preload-051152) Creating domain...
	I1028 18:28:01.819872   66801 main.go:141] libmachine: (no-preload-051152) Waiting to get IP...
	I1028 18:28:01.820792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:01.821214   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:01.821287   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:01.821204   68016 retry.go:31] will retry after 269.081621ms: waiting for machine to come up
	I1028 18:28:02.091799   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.092220   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.092242   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.092175   68016 retry.go:31] will retry after 341.926163ms: waiting for machine to come up
	I1028 18:28:02.435679   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.436035   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.436067   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.435982   68016 retry.go:31] will retry after 355.739166ms: waiting for machine to come up
	I1028 18:28:02.793549   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.793928   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.793953   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.793881   68016 retry.go:31] will retry after 496.396184ms: waiting for machine to come up
	I1028 18:28:05.607678   66600 start.go:360] acquireMachinesLock for embed-certs-021370: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:28:03.291568   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.292038   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.292068   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.291978   68016 retry.go:31] will retry after 561.311245ms: waiting for machine to come up
	I1028 18:28:03.854782   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.855137   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.855166   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.855088   68016 retry.go:31] will retry after 574.675969ms: waiting for machine to come up
	I1028 18:28:04.431784   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:04.432226   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:04.432250   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:04.432177   68016 retry.go:31] will retry after 1.028136295s: waiting for machine to come up
	I1028 18:28:05.461477   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:05.461839   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:05.461869   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:05.461795   68016 retry.go:31] will retry after 955.343831ms: waiting for machine to come up
	I1028 18:28:06.418161   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:06.418629   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:06.418659   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:06.418576   68016 retry.go:31] will retry after 1.615930502s: waiting for machine to come up
	I1028 18:28:08.036275   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:08.036641   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:08.036662   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:08.036615   68016 retry.go:31] will retry after 2.111463198s: waiting for machine to come up
	I1028 18:28:10.150891   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:10.151403   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:10.151429   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:10.151351   68016 retry.go:31] will retry after 2.35232289s: waiting for machine to come up
	I1028 18:28:12.506070   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:12.506471   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:12.506494   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:12.506447   68016 retry.go:31] will retry after 2.874687772s: waiting for machine to come up
	I1028 18:28:15.384360   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:15.384680   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:15.384712   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:15.384636   68016 retry.go:31] will retry after 3.299950406s: waiting for machine to come up
	I1028 18:28:19.893083   67149 start.go:364] duration metric: took 3m43.747535803s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:28:19.893161   67149 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:19.893170   67149 fix.go:54] fixHost starting: 
	I1028 18:28:19.893556   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:19.893608   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:19.909857   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I1028 18:28:19.910215   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:19.910669   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:28:19.910690   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:19.911049   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:19.911241   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:19.911395   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:28:19.912825   67149 fix.go:112] recreateIfNeeded on old-k8s-version-223868: state=Stopped err=<nil>
	I1028 18:28:19.912856   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	W1028 18:28:19.912996   67149 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:19.915041   67149 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	I1028 18:28:19.916422   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .Start
	I1028 18:28:19.916611   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:28:19.917295   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:28:19.917560   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:28:19.917951   67149 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:28:19.918628   67149 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:28:18.688243   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.688710   66801 main.go:141] libmachine: (no-preload-051152) Found IP for machine: 192.168.61.78
	I1028 18:28:18.688738   66801 main.go:141] libmachine: (no-preload-051152) Reserving static IP address...
	I1028 18:28:18.688754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has current primary IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.689151   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.689174   66801 main.go:141] libmachine: (no-preload-051152) Reserved static IP address: 192.168.61.78
	I1028 18:28:18.689188   66801 main.go:141] libmachine: (no-preload-051152) DBG | skip adding static IP to network mk-no-preload-051152 - found existing host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"}
	I1028 18:28:18.689198   66801 main.go:141] libmachine: (no-preload-051152) Waiting for SSH to be available...
	I1028 18:28:18.689217   66801 main.go:141] libmachine: (no-preload-051152) DBG | Getting to WaitForSSH function...
	I1028 18:28:18.691372   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691721   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.691754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691861   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH client type: external
	I1028 18:28:18.691890   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa (-rw-------)
	I1028 18:28:18.691950   66801 main.go:141] libmachine: (no-preload-051152) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:18.691967   66801 main.go:141] libmachine: (no-preload-051152) DBG | About to run SSH command:
	I1028 18:28:18.691979   66801 main.go:141] libmachine: (no-preload-051152) DBG | exit 0
	I1028 18:28:18.816169   66801 main.go:141] libmachine: (no-preload-051152) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:18.816571   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetConfigRaw
	I1028 18:28:18.817209   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:18.819569   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.819891   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.819913   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.820164   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:28:18.820375   66801 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:18.820392   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:18.820618   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.822580   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.822953   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.822983   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.823096   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.823250   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823390   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823537   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.823687   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.823878   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.823890   66801 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:18.932489   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:18.932516   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.932769   66801 buildroot.go:166] provisioning hostname "no-preload-051152"
	I1028 18:28:18.932798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.933003   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.935565   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.935938   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.935965   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.936147   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.936346   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936513   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936674   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.936838   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.936994   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.937006   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-051152 && echo "no-preload-051152" | sudo tee /etc/hostname
	I1028 18:28:19.057840   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-051152
	
	I1028 18:28:19.057872   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.060536   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.060917   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.060946   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.061068   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.061237   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061405   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061544   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.061700   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.061848   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.061863   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-051152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-051152/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-051152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:19.180890   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:19.180920   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:19.180957   66801 buildroot.go:174] setting up certificates
	I1028 18:28:19.180971   66801 provision.go:84] configureAuth start
	I1028 18:28:19.180985   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:19.181299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.183792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184144   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.184172   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184309   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.186298   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186588   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.186616   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186722   66801 provision.go:143] copyHostCerts
	I1028 18:28:19.186790   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:19.186804   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:19.186868   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:19.186974   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:19.186986   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:19.187023   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:19.187107   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:19.187115   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:19.187146   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:19.187197   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.no-preload-051152 san=[127.0.0.1 192.168.61.78 localhost minikube no-preload-051152]
	I1028 18:28:19.275109   66801 provision.go:177] copyRemoteCerts
	I1028 18:28:19.275175   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:19.275200   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.278392   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.278946   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.278978   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.279183   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.279454   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.279651   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.279789   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.362094   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:19.384635   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:28:19.406649   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:19.428807   66801 provision.go:87] duration metric: took 247.825267ms to configureAuth
	I1028 18:28:19.428830   66801 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:19.429026   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:28:19.429090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.431615   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.431928   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.431954   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.432090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.432278   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432434   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432602   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.432786   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.432932   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.432946   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:19.655137   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:19.655163   66801 machine.go:96] duration metric: took 834.775161ms to provisionDockerMachine
	I1028 18:28:19.655175   66801 start.go:293] postStartSetup for "no-preload-051152" (driver="kvm2")
	I1028 18:28:19.655185   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:19.655199   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.655509   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:19.655532   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.658099   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658411   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.658442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658566   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.658744   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.658884   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.659013   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.743030   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:19.746986   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:19.747007   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:19.747081   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:19.747177   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:19.747290   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:19.756378   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:19.779243   66801 start.go:296] duration metric: took 124.056855ms for postStartSetup
	I1028 18:28:19.779283   66801 fix.go:56] duration metric: took 19.173756385s for fixHost
	I1028 18:28:19.779305   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.781887   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782205   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.782226   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782367   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.782557   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782709   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782836   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.782999   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.783180   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.783191   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:19.892920   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140099.866892804
	
	I1028 18:28:19.892944   66801 fix.go:216] guest clock: 1730140099.866892804
	I1028 18:28:19.892954   66801 fix.go:229] Guest: 2024-10-28 18:28:19.866892804 +0000 UTC Remote: 2024-10-28 18:28:19.779287594 +0000 UTC m=+271.674302547 (delta=87.60521ms)
	I1028 18:28:19.892997   66801 fix.go:200] guest clock delta is within tolerance: 87.60521ms
	I1028 18:28:19.893008   66801 start.go:83] releasing machines lock for "no-preload-051152", held for 19.287505767s
	I1028 18:28:19.893034   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.893299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.895775   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896177   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.896204   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896362   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.896826   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897023   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897133   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:19.897171   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.897267   66801 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:19.897291   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.899703   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.899995   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900031   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900054   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900208   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900374   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900416   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900550   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.900626   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900707   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.900818   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900944   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.901098   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.982201   66801 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:20.008913   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:20.157816   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:20.165773   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:20.165837   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:20.187342   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:20.187359   66801 start.go:495] detecting cgroup driver to use...
	I1028 18:28:20.187423   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:20.204825   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:20.220702   66801 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:20.220776   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:20.238812   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:20.253664   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:20.363567   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:20.534475   66801 docker.go:233] disabling docker service ...
	I1028 18:28:20.534564   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:20.548424   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:20.564292   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:20.687135   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:20.796225   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:20.810327   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:20.828804   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:28:20.828866   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.838719   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:20.838768   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.849166   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.862811   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.875223   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:20.885402   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.895602   66801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.914163   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.924194   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:20.934907   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:20.934958   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:20.948898   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:20.958955   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:21.069438   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:21.175294   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:21.175379   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:21.179886   66801 start.go:563] Will wait 60s for crictl version
	I1028 18:28:21.179942   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.184195   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:21.226939   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:21.227043   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.254702   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.284607   66801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:28:21.285906   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:21.288560   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.288918   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:21.288945   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.289132   66801 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:21.293108   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:21.307303   66801 kubeadm.go:883] updating cluster {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:21.307447   66801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:28:21.307495   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:21.347493   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:28:21.347520   66801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:21.347595   66801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.347609   66801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.347621   66801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.347656   66801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.347690   66801 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 18:28:21.347691   66801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.347758   66801 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.347695   66801 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349312   66801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.349387   66801 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 18:28:21.349402   66801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.349526   66801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.349574   66801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.349582   66801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.349632   66801 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349311   66801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.515246   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.515760   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.543817   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 18:28:21.551755   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.562433   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.594208   66801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 18:28:21.594257   66801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.594291   66801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 18:28:21.594317   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.594323   66801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.594364   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.666046   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.666654   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.757831   66801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 18:28:21.757867   66801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.757867   66801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 18:28:21.757894   66801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.757914   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757926   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.757937   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757982   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.758142   66801 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 18:28:21.758161   66801 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 18:28:21.758197   66801 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.758169   66801 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.758234   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.758270   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.813746   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.813792   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.813836   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.813837   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.813840   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.813890   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.934434   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.958229   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.958287   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.958377   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.958381   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.958467   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.053179   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 18:28:22.053304   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.053351   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 18:28:22.053447   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:22.087756   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:22.087762   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:22.087826   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:22.087867   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.087897   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 18:28:22.087907   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087938   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087942   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 18:28:22.161136   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 18:28:22.161259   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:22.201924   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 18:28:22.201967   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 18:28:22.202032   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:22.202068   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:21.207941   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:28:21.209066   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.209518   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.209604   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.209495   68155 retry.go:31] will retry after 258.02952ms: waiting for machine to come up
	I1028 18:28:21.468599   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.469034   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.469052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.468996   68155 retry.go:31] will retry after 389.053264ms: waiting for machine to come up
	I1028 18:28:21.859493   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.859987   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.860017   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.859929   68155 retry.go:31] will retry after 454.438888ms: waiting for machine to come up
	I1028 18:28:22.315484   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.315961   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.315988   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.315904   68155 retry.go:31] will retry after 531.549561ms: waiting for machine to come up
	I1028 18:28:22.849247   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.849736   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.849791   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.849693   68155 retry.go:31] will retry after 602.202649ms: waiting for machine to come up
	I1028 18:28:23.453311   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:23.453859   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:23.453887   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:23.453796   68155 retry.go:31] will retry after 836.622626ms: waiting for machine to come up
	I1028 18:28:24.291959   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:24.292286   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:24.292315   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:24.292252   68155 retry.go:31] will retry after 1.187276744s: waiting for machine to come up
	I1028 18:28:25.480962   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:25.481398   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:25.481417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:25.481350   68155 retry.go:31] will retry after 1.417127806s: waiting for machine to come up
	I1028 18:28:23.586400   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.127903   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3: (2.040063682s)
	I1028 18:28:24.127962   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 18:28:24.127967   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (1.966690859s)
	I1028 18:28:24.127991   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 18:28:24.128010   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.925953727s)
	I1028 18:28:24.128034   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.925947261s)
	I1028 18:28:24.128041   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 18:28:24.128048   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 18:28:24.127904   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.03994028s)
	I1028 18:28:24.128069   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:24.128085   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 18:28:24.128109   66801 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 18:28:24.128123   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.128138   66801 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.128166   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:24.128180   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.132734   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 18:28:26.097200   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.9689964s)
	I1028 18:28:26.097240   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 18:28:26.097241   66801 ssh_runner.go:235] Completed: which crictl: (1.969052863s)
	I1028 18:28:26.097264   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.097308   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:26.097309   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.900944   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:26.901481   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:26.901511   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:26.901426   68155 retry.go:31] will retry after 1.766762252s: waiting for machine to come up
	I1028 18:28:28.670334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:28.670798   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:28.670827   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:28.670742   68155 retry.go:31] will retry after 2.287152926s: waiting for machine to come up
	I1028 18:28:30.959639   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:30.959947   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:30.959963   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:30.959917   68155 retry.go:31] will retry after 1.799223833s: waiting for machine to come up
	I1028 18:28:28.165293   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.067952153s)
	I1028 18:28:28.165410   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:28.165497   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.068111312s)
	I1028 18:28:28.165523   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 18:28:28.165548   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.165591   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.208189   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:30.152411   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.986796263s)
	I1028 18:28:30.152458   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 18:28:30.152496   66801 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152504   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.944281988s)
	I1028 18:28:30.152550   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152556   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 18:28:30.152652   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:32.761498   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:32.761941   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:32.761968   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:32.761894   68155 retry.go:31] will retry after 2.231065891s: waiting for machine to come up
	I1028 18:28:34.994438   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:34.994902   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:34.994936   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:34.994847   68155 retry.go:31] will retry after 4.079794439s: waiting for machine to come up
	I1028 18:28:33.842059   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.689484833s)
	I1028 18:28:33.842109   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 18:28:33.842138   66801 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:33.842155   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.68947822s)
	I1028 18:28:33.842184   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 18:28:33.842206   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:35.714458   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.872222439s)
	I1028 18:28:35.714493   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 18:28:35.714521   66801 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:35.714567   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:36.568124   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 18:28:36.568177   66801 cache_images.go:123] Successfully loaded all cached images
	I1028 18:28:36.568185   66801 cache_images.go:92] duration metric: took 15.220649269s to LoadCachedImages
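
The image-cache phase above follows a check/remove/reload cycle: each pinned image is inspected with podman, removed via crictl when the on-host image ID does not match the expected hash, and then re-loaded from the tarball cache under .minikube/cache/images. A minimal sketch of that cycle follows; the image name, tarball path, and the expected_hash variable are illustrative placeholders, not values taken from this run.

	img=registry.k8s.io/kube-apiserver:v1.31.2
	tarball=/var/lib/minikube/images/kube-apiserver_v1.31.2
	# expected_hash stands in for the image ID minikube pins for this release
	have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null || true)
	if [ "$have" != "$expected_hash" ]; then
	  sudo /usr/bin/crictl rmi "$img" || true     # drop the mismatched image
	  sudo podman load -i "$tarball"              # reload it from the cached tarball
	fi
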
	I1028 18:28:36.568199   66801 kubeadm.go:934] updating node { 192.168.61.78 8443 v1.31.2 crio true true} ...
	I1028 18:28:36.568310   66801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-051152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:36.568383   66801 ssh_runner.go:195] Run: crio config
	I1028 18:28:36.613400   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:36.613425   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:36.613435   66801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:36.613454   66801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.78 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-051152 NodeName:no-preload-051152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:28:36.613596   66801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-051152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
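
The kubeadm config generated above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single document. As a side note, a config like this can be sanity-checked without modifying the node by running kubeadm in dry-run mode; the path below is the staging file this run writes a few lines later (/var/tmp/minikube/kubeadm.yaml.new), used here purely as an example.

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
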
	
	I1028 18:28:36.613669   66801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:28:36.624493   66801 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:36.624553   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:36.633828   66801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 18:28:36.649661   66801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:36.665454   66801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1028 18:28:36.681280   66801 ssh_runner.go:195] Run: grep 192.168.61.78	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:36.685010   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
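
The one-liner above pins control-plane.minikube.internal to the node IP: any existing entry is filtered out of /etc/hosts, the fresh mapping is appended, and the temp file is copied back with sudo. The copy step matters because in "sudo some-command > /etc/hosts" the redirection would be performed by the unprivileged shell, not by sudo. An equivalent sketch with a placeholder IP:

	ip=192.168.61.78    # placeholder; substitute the node's actual IP
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"
	} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
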
	I1028 18:28:36.697177   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:36.823266   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:36.840346   66801 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152 for IP: 192.168.61.78
	I1028 18:28:36.840366   66801 certs.go:194] generating shared ca certs ...
	I1028 18:28:36.840380   66801 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:36.840538   66801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:36.840578   66801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:36.840586   66801 certs.go:256] generating profile certs ...
	I1028 18:28:36.840661   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.key
	I1028 18:28:36.840722   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key.262d982c
	I1028 18:28:36.840758   66801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key
	I1028 18:28:36.840859   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:36.840892   66801 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:36.840902   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:36.840922   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:36.840943   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:36.840971   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:36.841025   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:36.841818   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:36.881548   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:36.907084   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:36.947810   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:36.976268   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 18:28:37.003795   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:28:37.036252   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:37.059731   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:28:37.083467   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:37.106397   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:37.128719   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:37.151133   66801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:37.166917   66801 ssh_runner.go:195] Run: openssl version
	I1028 18:28:37.172387   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:37.182117   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186329   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186389   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.191925   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:37.201799   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:37.211620   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215889   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215923   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.221588   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:37.231983   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:37.242291   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246869   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246904   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.252408   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
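
The openssl -hash / ln pairs above install each extra CA into the system trust store: openssl x509 -hash -noout prints the certificate's subject-name hash, and the PEM is symlinked as <hash>.0 under /etc/ssl/certs so that OpenSSL's hashed-directory lookup can find it. A generic sketch of the same technique, using one of the certificate paths shown above:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	# OpenSSL resolves CAs via <subject-hash>.0 symlinks in the certs directory
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
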
	I1028 18:28:37.262946   66801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:37.267334   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:37.273164   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:37.278831   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:37.284778   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:37.290547   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:37.296195   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
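
Each openssl x509 -checkend 86400 call above asks whether the certificate remains valid for at least another 86,400 seconds (24 hours); the command exits non-zero if the certificate would expire within that window, which is what would trigger regeneration. For example, using one of the paths checked above:

	# exit status 0: still valid for at least a day; non-zero: renew it
	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate expires within 24h; regeneration needed"
	fi
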
	I1028 18:28:37.301915   66801 kubeadm.go:392] StartCluster: {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:37.301986   66801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:37.302037   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.345115   66801 cri.go:89] found id: ""
	I1028 18:28:37.345185   66801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:37.355312   66801 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:37.355328   66801 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:37.355370   66801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:37.364777   66801 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:37.366056   66801 kubeconfig.go:125] found "no-preload-051152" server: "https://192.168.61.78:8443"
	I1028 18:28:37.368829   66801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:37.378010   66801 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.78
	I1028 18:28:37.378039   66801 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:37.378047   66801 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:37.378083   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.413442   66801 cri.go:89] found id: ""
	I1028 18:28:37.413522   66801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:37.428998   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:37.438365   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:37.438391   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:37.438442   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:37.447260   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:37.447310   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:37.456615   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:37.465292   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:37.465351   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:37.474382   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.482957   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:37.483012   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.491991   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:37.500635   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:37.500709   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
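
The grep/rm pairs above are the stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise (or, as in this run, when the file does not exist) it is removed so the subsequent kubeadm init phases can regenerate it. The same logic written out as a single loop:

	endpoint='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # drop any kubeconfig that does not reference the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
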
	I1028 18:28:37.509632   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:37.518808   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:37.642796   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:40.421350   67489 start.go:364] duration metric: took 3m5.178550845s to acquireMachinesLock for "default-k8s-diff-port-692033"
	I1028 18:28:40.421416   67489 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:40.421430   67489 fix.go:54] fixHost starting: 
	I1028 18:28:40.421843   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:40.421894   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:40.439583   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I1028 18:28:40.440133   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:40.440679   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:28:40.440701   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:40.441025   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:40.441198   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:40.441359   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:28:40.443029   67489 fix.go:112] recreateIfNeeded on default-k8s-diff-port-692033: state=Stopped err=<nil>
	I1028 18:28:40.443055   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	W1028 18:28:40.443202   67489 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:40.445489   67489 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-692033" ...
	I1028 18:28:39.079052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079556   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079584   67149 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:28:39.079593   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:28:39.079888   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.079919   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | skip adding static IP to network mk-old-k8s-version-223868 - found existing host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"}
	I1028 18:28:39.079935   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:28:39.079955   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:28:39.079971   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:28:39.082041   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.082354   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082480   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:28:39.082500   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:28:39.082528   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:39.082555   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:28:39.082567   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:28:39.204523   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:39.204883   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:28:39.205526   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.208073   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208434   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.208478   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208709   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:28:39.208907   67149 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:39.208926   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:39.209144   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.211109   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211407   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.211437   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.211739   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.211888   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.212033   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.212218   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.212388   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.212398   67149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:39.316528   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:39.316566   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.316813   67149 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:28:39.316841   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.317028   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.319389   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319687   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.319713   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319836   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.320017   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320167   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320310   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.320458   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.320642   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.320656   67149 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:28:39.439149   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:28:39.439179   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.441957   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442268   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.442300   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442528   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.442736   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.442940   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.443122   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.443304   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.443525   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.443550   67149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:39.561619   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:39.561651   67149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:39.561702   67149 buildroot.go:174] setting up certificates
	I1028 18:28:39.561716   67149 provision.go:84] configureAuth start
	I1028 18:28:39.561731   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.562015   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.564838   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565195   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.565229   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565373   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.567875   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568262   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.568287   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568452   67149 provision.go:143] copyHostCerts
	I1028 18:28:39.568534   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:39.568553   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:39.568621   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:39.568745   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:39.568768   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:39.568810   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:39.568899   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:39.568911   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:39.568937   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:39.569006   67149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
	I1028 18:28:39.786398   67149 provision.go:177] copyRemoteCerts
	I1028 18:28:39.786449   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:39.786482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.789048   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789373   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.789417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789535   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.789733   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.789884   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.790013   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:39.871816   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:39.902889   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:28:39.932633   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:39.958581   67149 provision.go:87] duration metric: took 396.851161ms to configureAuth
	I1028 18:28:39.958609   67149 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:39.958796   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:28:39.958881   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.961667   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962019   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.962044   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962240   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.962468   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962671   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962850   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.963037   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.963220   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.963239   67149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:40.179808   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:40.179843   67149 machine.go:96] duration metric: took 970.91659ms to provisionDockerMachine
	I1028 18:28:40.179857   67149 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:28:40.179869   67149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:40.179917   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.180287   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:40.180319   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.183011   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183383   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.183411   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183578   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.183770   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.183964   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.184114   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.270445   67149 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:40.275798   67149 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:40.275825   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:40.275898   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:40.275995   67149 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:40.276108   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:40.287529   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:40.310860   67149 start.go:296] duration metric: took 130.989944ms for postStartSetup
	I1028 18:28:40.310899   67149 fix.go:56] duration metric: took 20.417730265s for fixHost
	I1028 18:28:40.310925   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.313613   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.313967   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.314000   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.314175   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.314354   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314518   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314692   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.314862   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:40.315021   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:40.315032   67149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:40.421204   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140120.384024791
	
	I1028 18:28:40.421225   67149 fix.go:216] guest clock: 1730140120.384024791
	I1028 18:28:40.421235   67149 fix.go:229] Guest: 2024-10-28 18:28:40.384024791 +0000 UTC Remote: 2024-10-28 18:28:40.310903937 +0000 UTC m=+244.300202669 (delta=73.120854ms)
	I1028 18:28:40.421259   67149 fix.go:200] guest clock delta is within tolerance: 73.120854ms
	I1028 18:28:40.421265   67149 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 20.528130845s
	I1028 18:28:40.421297   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.421574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:40.424700   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425088   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.425116   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425286   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.425971   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426188   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426266   67149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:40.426340   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.426604   67149 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:40.426632   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.429408   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429569   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429807   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.429841   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429950   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430059   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.430092   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.430123   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430236   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430383   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430459   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430616   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.430614   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.509203   67149 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:40.540019   67149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:40.701732   67149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:40.710264   67149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:40.710354   67149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:40.731373   67149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:40.731398   67149 start.go:495] detecting cgroup driver to use...
	I1028 18:28:40.731465   67149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:40.751312   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:40.766288   67149 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:40.766399   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:40.783995   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:40.800295   67149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:40.940688   67149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:41.101493   67149 docker.go:233] disabling docker service ...
	I1028 18:28:41.101562   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:41.123350   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:41.141744   67149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:41.279020   67149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:41.414748   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:41.429469   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:41.448611   67149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:28:41.448669   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.460766   67149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:41.460842   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.473021   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.485888   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.497498   67149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:41.509250   67149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:41.519701   67149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:41.519754   67149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:41.534596   67149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:41.544814   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:41.681203   67149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:41.786879   67149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:41.786957   67149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:41.791981   67149 start.go:563] Will wait 60s for crictl version
	I1028 18:28:41.792041   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:41.796034   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:41.839867   67149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:41.839958   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.873029   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.904534   67149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 18:28:38.508232   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.720400   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.784720   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.892007   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:38.892083   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.392953   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.892228   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.912702   66801 api_server.go:72] duration metric: took 1.020696043s to wait for apiserver process to appear ...
	I1028 18:28:39.912728   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:28:39.912749   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:39.913221   66801 api_server.go:269] stopped: https://192.168.61.78:8443/healthz: Get "https://192.168.61.78:8443/healthz": dial tcp 192.168.61.78:8443: connect: connection refused
	I1028 18:28:40.413025   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:40.446984   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Start
	I1028 18:28:40.447191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring networks are active...
	I1028 18:28:40.447998   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network default is active
	I1028 18:28:40.448350   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network mk-default-k8s-diff-port-692033 is active
	I1028 18:28:40.448884   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Getting domain xml...
	I1028 18:28:40.449664   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Creating domain...
	I1028 18:28:41.740010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting to get IP...
	I1028 18:28:41.740827   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741273   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:41.741192   68341 retry.go:31] will retry after 276.06097ms: waiting for machine to come up
	I1028 18:28:42.018700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019135   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019159   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.019089   68341 retry.go:31] will retry after 318.252876ms: waiting for machine to come up
	I1028 18:28:42.338630   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339287   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339312   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.339205   68341 retry.go:31] will retry after 428.196122ms: waiting for machine to come up
	I1028 18:28:42.768656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769225   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769248   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.769134   68341 retry.go:31] will retry after 483.256928ms: waiting for machine to come up
	I1028 18:28:43.253739   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254304   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254353   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.254220   68341 retry.go:31] will retry after 577.932805ms: waiting for machine to come up
	I1028 18:28:43.834355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.834976   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.835021   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.834945   68341 retry.go:31] will retry after 639.531065ms: waiting for machine to come up
	I1028 18:28:44.475727   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476299   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476331   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:44.476248   68341 retry.go:31] will retry after 1.171398436s: waiting for machine to come up
	I1028 18:28:43.473059   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.473096   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.473113   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.588338   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.588371   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.913612   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.918557   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:43.918598   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.412902   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.425930   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.425971   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.913482   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.926092   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.926126   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:45.413673   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:45.419384   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:28:45.430384   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:28:45.430431   66801 api_server.go:131] duration metric: took 5.517694037s to wait for apiserver health ...
	I1028 18:28:45.430442   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:45.430450   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:45.432587   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:28:41.906005   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:41.909278   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909683   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:41.909741   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909996   67149 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:41.915405   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:41.931747   67149 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:41.931886   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:28:41.931944   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:41.987909   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:41.987966   67149 ssh_runner.go:195] Run: which lz4
	I1028 18:28:41.993527   67149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:28:41.998982   67149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:28:41.999014   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:28:43.643480   67149 crio.go:462] duration metric: took 1.649982959s to copy over tarball
	I1028 18:28:43.643559   67149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:28:45.433946   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:28:45.453114   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:28:45.479255   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:28:45.497020   66801 system_pods.go:59] 8 kube-system pods found
	I1028 18:28:45.497072   66801 system_pods.go:61] "coredns-7c65d6cfc9-74b6t" [b6a550da-7c40-4283-b49e-1ab29e652037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:28:45.497084   66801 system_pods.go:61] "etcd-no-preload-051152" [d5b31ded-95ce-4dde-ba88-e653dfdb8d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:28:45.497097   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [95d0acb0-4d58-4307-9f4f-10f920ff4745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:28:45.497105   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [722530e1-1d76-40dc-8a24-fe79d0167835] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:28:45.497112   66801 system_pods.go:61] "kube-proxy-kg42f" [7891354b-a501-45c4-b15c-cf6d29e3721f] Running
	I1028 18:28:45.497121   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [c658808c-79c2-4b8e-b72c-0b2d8e058ab4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:28:45.497130   66801 system_pods.go:61] "metrics-server-6867b74b74-vgd8k" [626b71a2-6904-409f-9274-6963a94e6ac2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:28:45.497137   66801 system_pods.go:61] "storage-provisioner" [39bf84c9-9c6f-4048-8a11-460fb12f622b] Running
	I1028 18:28:45.497146   66801 system_pods.go:74] duration metric: took 17.863894ms to wait for pod list to return data ...
	I1028 18:28:45.497160   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:28:45.501945   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:28:45.501977   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:28:45.501993   66801 node_conditions.go:105] duration metric: took 4.827279ms to run NodePressure ...
	I1028 18:28:45.502014   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:45.835429   66801 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840823   66801 kubeadm.go:739] kubelet initialised
	I1028 18:28:45.840852   66801 kubeadm.go:740] duration metric: took 5.391212ms waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840862   66801 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:28:45.846565   66801 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:45.648994   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649559   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649587   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:45.649512   68341 retry.go:31] will retry after 1.258585317s: waiting for machine to come up
	I1028 18:28:46.909541   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909955   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:46.909911   68341 retry.go:31] will retry after 1.827150306s: waiting for machine to come up
	I1028 18:28:48.738193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738696   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738725   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:48.738653   68341 retry.go:31] will retry after 1.738249889s: waiting for machine to come up
	I1028 18:28:46.758767   67149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115173801s)
	I1028 18:28:46.758810   67149 crio.go:469] duration metric: took 3.115300284s to extract the tarball
	I1028 18:28:46.758821   67149 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:28:46.816906   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:46.864347   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:46.864376   67149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:46.864499   67149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.864564   67149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.864623   67149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.864639   67149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.864674   67149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.864686   67149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:28:46.864710   67149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.864529   67149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:46.866383   67149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.866445   67149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.866493   67149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.866579   67149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.866795   67149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.867073   67149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.867095   67149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:28:46.867488   67149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.043358   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.053844   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.055684   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.056812   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.066211   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.090931   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.104900   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:28:47.141214   67149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:28:47.141260   67149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.141307   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202804   67149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:28:47.202863   67149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.202873   67149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:28:47.202903   67149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.202915   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202944   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.234811   67149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:28:47.234853   67149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.234900   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.236717   67149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:28:47.236751   67149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.236798   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.243872   67149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:28:47.243918   67149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.243971   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260210   67149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:28:47.260253   67149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:28:47.260256   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.260293   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260398   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.260438   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.260456   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.260517   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.260559   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413617   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.413776   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.413804   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413825   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.414063   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.414103   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.414150   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.544933   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.581577   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.582079   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.582161   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.582206   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.582344   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.582819   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.662237   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:28:47.736212   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.739757   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:28:47.739928   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:28:47.739802   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:28:47.739812   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:28:47.739841   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:28:47.783578   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:28:49.121698   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:49.266583   67149 cache_images.go:92] duration metric: took 2.402188013s to LoadCachedImages
	W1028 18:28:49.266686   67149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 18:28:49.266702   67149 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:28:49.266828   67149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:49.266918   67149 ssh_runner.go:195] Run: crio config
	I1028 18:28:49.318146   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:28:49.318167   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:49.318176   67149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:49.318193   67149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:28:49.318310   67149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:49.318371   67149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:28:49.329249   67149 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:49.329339   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:49.339379   67149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:28:49.359216   67149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:49.378289   67149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 18:28:49.397766   67149 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:49.401788   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:49.418204   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:49.558031   67149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:49.575443   67149 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:28:49.575469   67149 certs.go:194] generating shared ca certs ...
	I1028 18:28:49.575489   67149 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:49.575693   67149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:49.575746   67149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:49.575756   67149 certs.go:256] generating profile certs ...
	I1028 18:28:49.575859   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:28:49.575914   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:28:49.575951   67149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:28:49.576058   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:49.576092   67149 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:49.576103   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:49.576131   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:49.576162   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:49.576186   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:49.576238   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:49.576994   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:49.622814   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:49.653690   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:49.678975   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:49.707340   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:28:49.744836   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:28:49.776367   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:49.818999   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:28:49.847531   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:49.871924   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:49.897751   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:49.923267   67149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:49.939805   67149 ssh_runner.go:195] Run: openssl version
	I1028 18:28:49.945611   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:49.956191   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960862   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960916   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.966701   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:49.977882   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:49.990873   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995751   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995810   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:50.001891   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:50.013508   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:50.028132   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034144   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034217   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.041768   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:50.054079   67149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:50.058983   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:50.064802   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:50.070790   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:50.077090   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:50.083149   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:50.089232   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:28:50.095205   67149 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:50.095338   67149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:50.095411   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.139777   67149 cri.go:89] found id: ""
	I1028 18:28:50.139854   67149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:50.151967   67149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:50.151986   67149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:50.152040   67149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:50.163454   67149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:50.164876   67149 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:28:50.165798   67149 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-223868" cluster setting kubeconfig missing "old-k8s-version-223868" context setting]
	I1028 18:28:50.167121   67149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:50.169545   67149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:50.179447   67149 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.194
	I1028 18:28:50.179477   67149 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:50.179490   67149 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:50.179542   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.213891   67149 cri.go:89] found id: ""
	I1028 18:28:50.213963   67149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:50.231491   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:50.241752   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:50.241775   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:50.241829   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:50.252015   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:50.252075   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:50.263032   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:50.273500   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:50.273564   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:50.283603   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.293521   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:50.293567   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.303701   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:50.316202   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:50.316269   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:28:50.327841   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:50.341366   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:50.469586   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:49.414188   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:51.855115   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:50.478658   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479208   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479237   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:50.479151   68341 retry.go:31] will retry after 2.362711935s: waiting for machine to come up
	I1028 18:28:52.842907   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843290   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843314   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:52.843250   68341 retry.go:31] will retry after 2.561710525s: waiting for machine to come up
	I1028 18:28:51.507608   67149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037983659s)
	I1028 18:28:51.507645   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.733141   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.842228   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.947336   67149 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:51.947430   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.447618   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.947814   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.448476   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.947571   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.448371   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.947700   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.447735   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.948435   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.857886   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:54.862972   66801 pod_ready.go:93] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:54.863005   66801 pod_ready.go:82] duration metric: took 9.016413449s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:54.863019   66801 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869043   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:55.869076   66801 pod_ready.go:82] duration metric: took 1.006049217s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869091   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874842   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.874865   66801 pod_ready.go:82] duration metric: took 2.005766936s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874875   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878913   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.878930   66801 pod_ready.go:82] duration metric: took 4.049698ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878937   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889897   66801 pod_ready.go:93] pod "kube-proxy-kg42f" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.889913   66801 pod_ready.go:82] duration metric: took 10.971269ms for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889921   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.407934   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408336   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408362   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:55.408274   68341 retry.go:31] will retry after 3.762790995s: waiting for machine to come up
	I1028 18:28:59.173489   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173900   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Found IP for machine: 192.168.39.215
	I1028 18:28:59.173923   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has current primary IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173929   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserving static IP address...
	I1028 18:28:59.174320   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.174343   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | skip adding static IP to network mk-default-k8s-diff-port-692033 - found existing host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"}
	I1028 18:28:59.174355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserved static IP address: 192.168.39.215
	I1028 18:28:59.174365   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for SSH to be available...
	I1028 18:28:59.174376   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Getting to WaitForSSH function...
	I1028 18:28:59.176441   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176755   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.176786   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176913   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH client type: external
	I1028 18:28:59.176936   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa (-rw-------)
	I1028 18:28:59.176958   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:59.176970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | About to run SSH command:
	I1028 18:28:59.176982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | exit 0
	I1028 18:28:59.300272   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:59.300649   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetConfigRaw
	I1028 18:28:59.301261   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.303505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.303832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.303857   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.304080   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:28:59.304287   67489 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:59.304310   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:59.304535   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.306713   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307008   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.307042   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307187   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.307348   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307627   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.307768   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.307936   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.307946   67489 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:59.412710   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:59.412743   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413009   67489 buildroot.go:166] provisioning hostname "default-k8s-diff-port-692033"
	I1028 18:28:59.413041   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.415772   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416048   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.416070   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416251   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.416437   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416728   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.416847   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.417030   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.417041   67489 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-692033 && echo "default-k8s-diff-port-692033" | sudo tee /etc/hostname
	I1028 18:28:59.538491   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-692033
	
	I1028 18:28:59.538518   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.540842   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541144   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.541173   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.541527   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541684   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541815   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.541964   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.542123   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.542138   67489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-692033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-692033/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-692033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:59.657448   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:59.657480   67489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:59.657524   67489 buildroot.go:174] setting up certificates
	I1028 18:28:59.657539   67489 provision.go:84] configureAuth start
	I1028 18:28:59.657556   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.657832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.660465   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660797   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.660840   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660949   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.663393   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663801   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.663830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663977   67489 provision.go:143] copyHostCerts
	I1028 18:28:59.664049   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:59.664062   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:59.664117   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:59.664217   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:59.664228   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:59.664250   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:59.664300   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:59.664308   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:59.664327   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:59.664403   67489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-692033 san=[127.0.0.1 192.168.39.215 default-k8s-diff-port-692033 localhost minikube]
	I1028 18:28:59.882619   67489 provision.go:177] copyRemoteCerts
	I1028 18:28:59.882672   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:59.882695   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.885303   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.885686   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885927   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.886121   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.886278   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.886382   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:28:59.975231   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:00.000412   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 18:29:00.024424   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:29:00.048646   67489 provision.go:87] duration metric: took 391.090444ms to configureAuth
	I1028 18:29:00.048674   67489 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:00.048884   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:00.048970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.051793   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052156   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.052185   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.052532   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052729   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052894   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.053080   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.053241   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.053254   67489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:00.525285   66600 start.go:364] duration metric: took 54.917560334s to acquireMachinesLock for "embed-certs-021370"
	I1028 18:29:00.525349   66600 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:29:00.525359   66600 fix.go:54] fixHost starting: 
	I1028 18:29:00.525740   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:29:00.525778   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:29:00.544614   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I1028 18:29:00.544976   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:29:00.545433   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:29:00.545455   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:29:00.545842   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:29:00.546046   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:00.546230   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:29:00.547770   66600 fix.go:112] recreateIfNeeded on embed-certs-021370: state=Stopped err=<nil>
	I1028 18:29:00.547794   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	W1028 18:29:00.547957   66600 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:29:00.549753   66600 out.go:177] * Restarting existing kvm2 VM for "embed-certs-021370" ...
	I1028 18:28:56.447531   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.947711   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.447782   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.947642   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.948256   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.447558   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.948018   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.448186   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.947565   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.280618   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:00.280641   67489 machine.go:96] duration metric: took 976.341252ms to provisionDockerMachine
	I1028 18:29:00.280653   67489 start.go:293] postStartSetup for "default-k8s-diff-port-692033" (driver="kvm2")
	I1028 18:29:00.280669   67489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:00.280690   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.281004   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:00.281044   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.283656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.283977   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.284010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.284170   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.284382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.284549   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.284692   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.372947   67489 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:00.377456   67489 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:00.377480   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:00.377547   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:00.377646   67489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:00.377762   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:00.388767   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:00.413520   67489 start.go:296] duration metric: took 132.852709ms for postStartSetup
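
The filesync.go lines above scan the local .minikube/addons and .minikube/files trees and mirror anything found onto the guest at the same relative path (here files/etc/ssl/certs/206802.pem lands in /etc/ssl/certs). A minimal sketch of that path mapping, assuming a hypothetical walkAssets helper rather than minikube's own filesync code:

// walkAssets walks a local root such as ~/.minikube/files and returns
// (local file, guest directory) pairs, so that files/etc/ssl/certs/206802.pem
// would be copied into /etc/ssl/certs on the guest.
package main

import (
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

type asset struct {
	LocalPath string // file on the host
	GuestDir  string // destination directory on the guest
}

func walkAssets(root string) ([]asset, error) {
	var assets []asset
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		dir := filepath.Dir(rel)
		if dir == "." {
			dir = "" // files at the root of the tree go to "/"
		}
		assets = append(assets, asset{LocalPath: path, GuestDir: "/" + dir})
		return nil
	})
	return assets, err
}

func main() {
	assets, err := walkAssets(filepath.Join(os.Getenv("HOME"), ".minikube", "files"))
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range assets {
		fmt.Printf("%s -> %s\n", a.LocalPath, a.GuestDir)
	}
}
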
	I1028 18:29:00.413557   67489 fix.go:56] duration metric: took 19.992127182s for fixHost
	I1028 18:29:00.413578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.416040   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416377   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.416405   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416553   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.416756   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.416930   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.417065   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.417228   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.417412   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.417424   67489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:00.525082   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140140.492840769
	
	I1028 18:29:00.525105   67489 fix.go:216] guest clock: 1730140140.492840769
	I1028 18:29:00.525114   67489 fix.go:229] Guest: 2024-10-28 18:29:00.492840769 +0000 UTC Remote: 2024-10-28 18:29:00.413561948 +0000 UTC m=+205.301669628 (delta=79.278821ms)
	I1028 18:29:00.525169   67489 fix.go:200] guest clock delta is within tolerance: 79.278821ms
	I1028 18:29:00.525180   67489 start.go:83] releasing machines lock for "default-k8s-diff-port-692033", held for 20.103791447s
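
The fix.go lines above compare the guest clock (read with `date +%s.%N` over SSH) against the local time and accept the restart only if the delta stays within tolerance. A small sketch of that comparison using the timestamps from the log; the one-second tolerance is an assumption for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

const clockTolerance = time.Second // assumed tolerance, for illustration only

// guestClockDelta parses the output of `date +%s.%N` (the %N part is always
// nine zero-padded digits) and returns guest time minus local time.
func guestClockDelta(guestDate string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	// Values copied from the log lines above; the result is ~79.278821ms.
	delta, err := guestClockDelta("1730140140.492840769", time.Unix(1730140140, 413561948))
	if err != nil {
		panic(err)
	}
	if delta < -clockTolerance || delta > clockTolerance {
		fmt.Printf("guest clock delta %v is outside tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
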
	I1028 18:29:00.525214   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.525495   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:00.528023   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528385   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.528415   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529038   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529287   67489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:00.529323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.529380   67489 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:00.529403   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.531822   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532022   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532163   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532294   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532443   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532481   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532488   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532612   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532680   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.532830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532830   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.532965   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.533103   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.609362   67489 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:00.636444   67489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:00.785916   67489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:00.792198   67489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:00.792279   67489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:00.812095   67489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:00.812124   67489 start.go:495] detecting cgroup driver to use...
	I1028 18:29:00.812190   67489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:00.829536   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:00.844021   67489 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:00.844090   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:00.858561   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:00.873128   67489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:00.990494   67489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:01.148650   67489 docker.go:233] disabling docker service ...
	I1028 18:29:01.148729   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:01.162487   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:01.177407   67489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:01.303665   67489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:01.430019   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:01.443822   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:01.462768   67489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:01.462830   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.473669   67489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:01.473737   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.484364   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.496220   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.507216   67489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:01.518848   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.534216   67489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.554294   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.565095   67489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:01.574547   67489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:01.574614   67489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:01.596531   67489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:01.606858   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:01.740272   67489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:01.844969   67489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:01.845053   67489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:01.850004   67489 start.go:563] Will wait 60s for crictl version
	I1028 18:29:01.850056   67489 ssh_runner.go:195] Run: which crictl
	I1028 18:29:01.854032   67489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:01.893281   67489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:01.893367   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.923557   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.956282   67489 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
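
Before declaring the runtime ready, the start.go lines above wait up to 60s for /var/run/crio/crio.sock to appear and then query crictl for the runtime version. A rough, stand-alone equivalent; the polling interval and error handling are assumptions, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the given path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Ask the CRI runtime for its version, as the log does with crictl.
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}
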
	I1028 18:29:00.551001   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Start
	I1028 18:29:00.551172   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring networks are active...
	I1028 18:29:00.551820   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network default is active
	I1028 18:29:00.552130   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network mk-embed-certs-021370 is active
	I1028 18:29:00.552482   66600 main.go:141] libmachine: (embed-certs-021370) Getting domain xml...
	I1028 18:29:00.553186   66600 main.go:141] libmachine: (embed-certs-021370) Creating domain...
	I1028 18:29:01.830016   66600 main.go:141] libmachine: (embed-certs-021370) Waiting to get IP...
	I1028 18:29:01.831046   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:01.831522   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:01.831630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:01.831518   68528 retry.go:31] will retry after 300.306268ms: waiting for machine to come up
	I1028 18:29:02.132901   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.133350   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.133383   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.133293   68528 retry.go:31] will retry after 383.232008ms: waiting for machine to come up
	I1028 18:29:02.518736   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.519274   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.519299   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.519241   68528 retry.go:31] will retry after 354.591942ms: waiting for machine to come up
	I1028 18:29:02.875813   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.876360   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.876397   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.876325   68528 retry.go:31] will retry after 529.444037ms: waiting for machine to come up
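
The retry.go lines above poll libmachine for the restarted VM's IP address, sleeping a growing interval between attempts until the DHCP lease shows up. A generic sketch of that wait loop; the backoff schedule and the getIP stand-in are illustrative only:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling getIP until it returns a non-empty address or the
// timeout elapses, sleeping a jittered, roughly doubling interval in between.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.0.2.10", nil // placeholder address for the example
	}, time.Minute)
	fmt.Println(ip, err)
}
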
	I1028 18:28:58.895888   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:58.895918   66801 pod_ready.go:82] duration metric: took 1.005990705s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:58.895932   66801 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:00.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:02.903390   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:01.957748   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:01.960967   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:01.961382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961635   67489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:01.966300   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:01.979786   67489 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:01.979899   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:01.979957   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:02.020659   67489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:02.020716   67489 ssh_runner.go:195] Run: which lz4
	I1028 18:29:02.024772   67489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:02.030183   67489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:02.030206   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:03.449423   67489 crio.go:462] duration metric: took 1.424673911s to copy over tarball
	I1028 18:29:03.449498   67489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:01.447557   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.947946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.448522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.947533   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.447522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.948025   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.448136   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.948157   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.447635   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.947987   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.407835   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:03.408366   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:03.408390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:03.408265   68528 retry.go:31] will retry after 680.005296ms: waiting for machine to come up
	I1028 18:29:04.089802   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.090390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.090409   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.090338   68528 retry.go:31] will retry after 833.681725ms: waiting for machine to come up
	I1028 18:29:04.925788   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.926278   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.926298   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.926227   68528 retry.go:31] will retry after 1.050194845s: waiting for machine to come up
	I1028 18:29:05.978270   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:05.978715   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:05.978742   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:05.978669   68528 retry.go:31] will retry after 1.416773018s: waiting for machine to come up
	I1028 18:29:07.397367   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:07.397843   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:07.397876   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:07.397787   68528 retry.go:31] will retry after 1.621623459s: waiting for machine to come up
	I1028 18:29:04.903465   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:06.903931   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:05.622217   67489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172685001s)
	I1028 18:29:05.622253   67489 crio.go:469] duration metric: took 2.172801769s to extract the tarball
	I1028 18:29:05.622264   67489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:05.660585   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:05.705484   67489 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:05.705510   67489 cache_images.go:84] Images are preloaded, skipping loading
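
The crio.go lines above decide whether the preload tarball is needed by listing images with `crictl images --output json` and looking for the expected kube-apiserver tag; after extraction the same listing reports all images as preloaded. A sketch of that check (the JSON field names are assumed from crictl's output format, not taken from minikube):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models just the fields of `crictl images --output json` that the
// check needs; field names here are an assumption about crictl's output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
	fmt.Println(ok, err)
}
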
	I1028 18:29:05.705520   67489 kubeadm.go:934] updating node { 192.168.39.215 8444 v1.31.2 crio true true} ...
	I1028 18:29:05.705634   67489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-692033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:05.705725   67489 ssh_runner.go:195] Run: crio config
	I1028 18:29:05.760618   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:05.760649   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:05.760661   67489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:05.760690   67489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-692033 NodeName:default-k8s-diff-port-692033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:05.760858   67489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-692033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:05.760936   67489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:05.771392   67489 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:05.771464   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:05.780926   67489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1028 18:29:05.797951   67489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:05.814159   67489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1028 18:29:05.830723   67489 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:05.835163   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:05.847192   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:05.972201   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:05.990475   67489 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033 for IP: 192.168.39.215
	I1028 18:29:05.990492   67489 certs.go:194] generating shared ca certs ...
	I1028 18:29:05.990511   67489 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:05.990711   67489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:05.990764   67489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:05.990776   67489 certs.go:256] generating profile certs ...
	I1028 18:29:05.990875   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.key
	I1028 18:29:05.990991   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key.81b9981a
	I1028 18:29:05.991052   67489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key
	I1028 18:29:05.991218   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:05.991268   67489 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:05.991283   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:05.991317   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:05.991359   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:05.991405   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:05.991481   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:05.992294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:06.033938   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:06.070407   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:06.115934   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:06.144600   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 18:29:06.169202   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:06.196294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:06.219384   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:29:06.242169   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:06.266506   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:06.290175   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:06.313006   67489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:06.329076   67489 ssh_runner.go:195] Run: openssl version
	I1028 18:29:06.335322   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:06.346021   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350401   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350464   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.356134   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:06.366765   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:06.377486   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381920   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381978   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.387492   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:06.398392   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:06.413238   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418376   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418429   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.423997   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:06.436170   67489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:06.440853   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:06.446851   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:06.452980   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:06.458973   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:06.465088   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:06.470776   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
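
Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509, assuming the certificate file holds a single PEM block:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
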
	I1028 18:29:06.476462   67489 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:06.476588   67489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:06.476638   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.519820   67489 cri.go:89] found id: ""
	I1028 18:29:06.519884   67489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:06.530091   67489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:06.530110   67489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:06.530171   67489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:06.539807   67489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:06.540946   67489 kubeconfig.go:125] found "default-k8s-diff-port-692033" server: "https://192.168.39.215:8444"
	I1028 18:29:06.543088   67489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:06.552354   67489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I1028 18:29:06.552379   67489 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:06.552389   67489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:06.552445   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.586545   67489 cri.go:89] found id: ""
	I1028 18:29:06.586611   67489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:06.603418   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:06.612856   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:06.612876   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:06.612921   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:29:06.621852   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:06.621900   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:06.631132   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:29:06.640088   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:06.640158   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:06.651007   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.660034   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:06.660104   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.669587   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:29:06.678863   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:06.678937   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:06.688820   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:06.698470   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:06.820432   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.030810   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210339958s)
	I1028 18:29:08.030839   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.255000   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.321500   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
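
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of the same sequence run locally instead of through ssh_runner, assuming kubeadm is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// The phase order below mirrors the commands in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
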
	I1028 18:29:08.412775   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:08.412854   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.913648   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.413011   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.459009   67489 api_server.go:72] duration metric: took 1.046232596s to wait for apiserver process to appear ...
	I1028 18:29:09.459041   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:09.459062   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:09.459626   67489 api_server.go:269] stopped: https://192.168.39.215:8444/healthz: Get "https://192.168.39.215:8444/healthz": dial tcp 192.168.39.215:8444: connect: connection refused
	I1028 18:29:09.960128   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:06.447581   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.947550   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.447977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.947491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.447960   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.947662   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.448201   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.947753   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.448116   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.948175   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.020419   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:09.020867   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:09.020899   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:09.020814   68528 retry.go:31] will retry after 2.2230034s: waiting for machine to come up
	I1028 18:29:11.245136   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:11.245630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:11.245657   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:11.245595   68528 retry.go:31] will retry after 2.153898764s: waiting for machine to come up
	I1028 18:29:09.403596   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:11.903702   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:12.135346   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.135381   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.135394   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.166207   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.166234   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.459631   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.473153   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.473183   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:12.959778   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.969281   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.969320   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:13.459913   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:13.464362   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:29:13.471925   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:13.471953   67489 api_server.go:131] duration metric: took 4.012904227s to wait for apiserver health ...
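
The healthz progression above (403 while anonymous access is still forbidden, then 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200) is the normal startup sequence of a restarted kube-apiserver. Below is a minimal, illustrative Go sketch of this kind of polling loop; it is not minikube's api_server.go, and the URL and timeout are taken from the log only as example values.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires. TLS verification is skipped because the probe runs
// before any client certificates are configured (illustrative only).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 are expected while post-start hooks are still running.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.215:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
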
	I1028 18:29:13.471964   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:13.471971   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:13.473908   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:13.475283   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:13.487393   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:13.532627   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:13.544945   67489 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:13.544982   67489 system_pods.go:61] "coredns-7c65d6cfc9-ctx9z" [7067f349-3a22-468d-bd9d-19d057eb43f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:13.544993   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [313161ff-f30f-4e25-978d-9aa2eba7fc44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:13.545004   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [e9a66e8e-946b-4365-bd63-3adfdd75e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:13.545014   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [0e682f68-2f9a-4bf3-bbe4-3a6b1ef6778d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:13.545021   67489 system_pods.go:61] "kube-proxy-86rll" [d34f46c6-3227-40c9-ac97-066b98bfce32] Running
	I1028 18:29:13.545029   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [b9058969-31e2-4249-862f-ef5de7784adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:13.545043   67489 system_pods.go:61] "metrics-server-6867b74b74-dz4nl" [833c650e-5f5d-46a1-9ae1-64619c53a92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:13.545047   67489 system_pods.go:61] "storage-provisioner" [342db8fa-7873-47b0-a5a6-52cde2e19d47] Running
	I1028 18:29:13.545053   67489 system_pods.go:74] duration metric: took 12.403166ms to wait for pod list to return data ...
	I1028 18:29:13.545060   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:13.548591   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:13.548619   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:13.548632   67489 node_conditions.go:105] duration metric: took 3.567222ms to run NodePressure ...
	I1028 18:29:13.548649   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:13.818718   67489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826139   67489 kubeadm.go:739] kubelet initialised
	I1028 18:29:13.826161   67489 kubeadm.go:740] duration metric: took 7.415257ms waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826170   67489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
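
The "extra waiting" step above repeatedly checks the Ready condition of the system-critical pods. A simplified client-go sketch of that check follows; it assumes a kubeconfig path and hard-codes one pod name from the log purely for illustration, and is not the pod_ready.go implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-ctx9z", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```
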
	I1028 18:29:13.833418   67489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.838793   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838820   67489 pod_ready.go:82] duration metric: took 5.377698ms for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.838831   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838840   67489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.843172   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843195   67489 pod_ready.go:82] duration metric: took 4.34633ms for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.843203   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843209   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.847581   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847615   67489 pod_ready.go:82] duration metric: took 4.389898ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.847630   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847642   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:11.448521   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.947592   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.448427   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.948413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.448390   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.948518   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.447929   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.948106   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.948236   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.401547   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:13.402054   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:13.402083   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:13.402028   68528 retry.go:31] will retry after 2.345507901s: waiting for machine to come up
	I1028 18:29:15.749122   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:15.749485   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:15.749502   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:15.749451   68528 retry.go:31] will retry after 2.974576274s: waiting for machine to come up
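
The repeated "will retry after ...: waiting for machine to come up" lines show a jittered, growing retry delay while the driver waits for the guest to obtain an IP. A minimal sketch of that pattern is below; lookupIP is a hypothetical stand-in for querying the hypervisor's DHCP leases and is not a libmachine API.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical helper standing in for a DHCP-lease query by
// MAC address; it only illustrates the shape of the retry loop.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log above.
func waitForIP(mac string, attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", fmt.Errorf("machine with MAC %s never reported an IP", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:2e:5a:fa", 5); err != nil {
		fmt.Println(err)
	}
}
```
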
	I1028 18:29:13.903930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.403934   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:15.858338   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:18.354245   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.447535   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.948117   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.448197   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.948491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.948393   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.448406   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.947788   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.448100   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.947907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.727508   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.727990   66600 main.go:141] libmachine: (embed-certs-021370) Found IP for machine: 192.168.50.62
	I1028 18:29:18.728011   66600 main.go:141] libmachine: (embed-certs-021370) Reserving static IP address...
	I1028 18:29:18.728028   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has current primary IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.728447   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.728478   66600 main.go:141] libmachine: (embed-certs-021370) Reserved static IP address: 192.168.50.62
	I1028 18:29:18.728497   66600 main.go:141] libmachine: (embed-certs-021370) DBG | skip adding static IP to network mk-embed-certs-021370 - found existing host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"}
	I1028 18:29:18.728510   66600 main.go:141] libmachine: (embed-certs-021370) Waiting for SSH to be available...
	I1028 18:29:18.728520   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Getting to WaitForSSH function...
	I1028 18:29:18.730574   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731031   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.731069   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731227   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH client type: external
	I1028 18:29:18.731248   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa (-rw-------)
	I1028 18:29:18.731282   66600 main.go:141] libmachine: (embed-certs-021370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:29:18.731310   66600 main.go:141] libmachine: (embed-certs-021370) DBG | About to run SSH command:
	I1028 18:29:18.731327   66600 main.go:141] libmachine: (embed-certs-021370) DBG | exit 0
	I1028 18:29:18.860213   66600 main.go:141] libmachine: (embed-certs-021370) DBG | SSH cmd err, output: <nil>: 
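
The WaitForSSH step above simply runs "exit 0" through the external ssh client with the options shown until the command succeeds. A self-contained sketch of that probe, reusing the host and key path from the log as example values (not minikube's sshutil code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshAvailable runs "exit 0" through the system ssh client with options
// similar to the ones logged above; success means the guest accepts SSH.
func sshAvailable(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.50.62"
	key := "/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa"
	for i := 0; i < 30; i++ {
		if sshAvailable(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```
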
	I1028 18:29:18.860619   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetConfigRaw
	I1028 18:29:18.861235   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:18.863576   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.863932   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.863956   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.864224   66600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/config.json ...
	I1028 18:29:18.864465   66600 machine.go:93] provisionDockerMachine start ...
	I1028 18:29:18.864521   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:18.864720   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.866951   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867314   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.867349   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867511   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.867665   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867811   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867941   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.868072   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.868230   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.868239   66600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:29:18.972695   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:29:18.972729   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.972970   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:29:18.973000   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.973209   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.975608   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.975889   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.975915   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.976082   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.976269   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976401   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976505   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.976625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.976796   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.976809   66600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-021370 && echo "embed-certs-021370" | sudo tee /etc/hostname
	I1028 18:29:19.094622   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-021370
	
	I1028 18:29:19.094655   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.097110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097436   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.097460   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097639   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.097817   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.097967   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.098121   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.098309   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.098517   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.098533   66600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-021370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-021370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-021370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:29:19.218088   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:29:19.218112   66600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:29:19.218140   66600 buildroot.go:174] setting up certificates
	I1028 18:29:19.218150   66600 provision.go:84] configureAuth start
	I1028 18:29:19.218159   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:19.218411   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:19.221093   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221441   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.221469   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221641   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.223628   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.223908   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.223928   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.224085   66600 provision.go:143] copyHostCerts
	I1028 18:29:19.224155   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:29:19.224185   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:29:19.224252   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:29:19.224380   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:29:19.224390   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:29:19.224422   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:29:19.224532   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:29:19.224542   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:29:19.224570   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:29:19.224655   66600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-021370 san=[127.0.0.1 192.168.50.62 embed-certs-021370 localhost minikube]
	I1028 18:29:19.402860   66600 provision.go:177] copyRemoteCerts
	I1028 18:29:19.402925   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:29:19.402954   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.405556   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.405904   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.405939   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.406100   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.406265   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.406391   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.406494   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.486543   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:19.510790   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:29:19.534037   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:29:19.557509   66600 provision.go:87] duration metric: took 339.349044ms to configureAuth
	I1028 18:29:19.557531   66600 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:19.557681   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:19.557745   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.560240   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560594   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.560623   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560757   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.560931   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561110   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561320   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.561490   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.561651   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.561664   66600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:19.781270   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:19.781304   66600 machine.go:96] duration metric: took 916.814114ms to provisionDockerMachine
	I1028 18:29:19.781317   66600 start.go:293] postStartSetup for "embed-certs-021370" (driver="kvm2")
	I1028 18:29:19.781327   66600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:19.781345   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:19.781664   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:19.781690   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.784176   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784509   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.784538   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784667   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.784854   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.785028   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.785171   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.867396   66600 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:19.871516   66600 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:19.871542   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:19.871630   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:19.871717   66600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:19.871799   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:19.882017   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:19.906531   66600 start.go:296] duration metric: took 125.203636ms for postStartSetup
	I1028 18:29:19.906562   66600 fix.go:56] duration metric: took 19.381205641s for fixHost
	I1028 18:29:19.906581   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.909285   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909610   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.909640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909778   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.909980   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910311   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910444   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.910625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.910788   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.910803   66600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:20.017311   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140159.989127147
	
	I1028 18:29:20.017339   66600 fix.go:216] guest clock: 1730140159.989127147
	I1028 18:29:20.017346   66600 fix.go:229] Guest: 2024-10-28 18:29:19.989127147 +0000 UTC Remote: 2024-10-28 18:29:19.906566181 +0000 UTC m=+356.890524496 (delta=82.560966ms)
	I1028 18:29:20.017368   66600 fix.go:200] guest clock delta is within tolerance: 82.560966ms
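
The guest clock check above reads `date +%s.%N` over SSH and compares it with the host-side timestamp, accepting a small skew instead of forcing a resync. A minimal sketch of parsing that output and computing the delta, using the two timestamps from the log; the tolerance value is an assumption for illustration.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730140159.989127147")
	if err != nil {
		panic(err)
	}
	// Host-side reference timestamp taken from the log line above.
	remote := time.Date(2024, time.October, 28, 18, 29, 19, 906566181, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance is illustrative; small skew is accepted rather than resynced.
	if delta < 2*time.Second {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance\n", delta)
	}
}
```
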
	I1028 18:29:20.017374   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 19.492049852s
	I1028 18:29:20.017396   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.017657   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:20.020286   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020680   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.020704   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020816   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021307   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021491   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021577   66600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:20.021616   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.021746   66600 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:20.021767   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.024157   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024429   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024511   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024533   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024679   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.024856   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.024880   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024896   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.025019   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025070   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.025160   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.025201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.025304   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025443   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.101316   66600 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:20.124859   66600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:20.268773   66600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:20.275277   66600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:20.275358   66600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:20.291972   66600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:20.291999   66600 start.go:495] detecting cgroup driver to use...
	I1028 18:29:20.292066   66600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:20.311389   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:20.325385   66600 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:20.325434   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:20.339246   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:20.353759   66600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:20.477639   66600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:20.622752   66600 docker.go:233] disabling docker service ...
	I1028 18:29:20.622825   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:20.637258   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:20.650210   66600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:20.801036   66600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:20.945078   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:20.959494   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:20.977797   66600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:20.977854   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.987991   66600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:20.988038   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.998188   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.008502   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.018540   66600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:21.028663   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.038758   66600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.056298   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.067136   66600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:21.076859   66600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:21.076906   66600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:21.090468   66600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:21.099951   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:21.226675   66600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:21.321993   66600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:21.322074   66600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:21.327981   66600 start.go:563] Will wait 60s for crictl version
	I1028 18:29:21.328028   66600 ssh_runner.go:195] Run: which crictl
	I1028 18:29:21.331673   66600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:21.369066   66600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:21.369168   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.396873   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.426233   66600 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:21.427570   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:21.430207   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430560   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:21.430582   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430732   66600 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:21.435293   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:21.447885   66600 kubeadm.go:883] updating cluster {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:21.447989   66600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:21.448067   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:21.488401   66600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:21.488488   66600 ssh_runner.go:195] Run: which lz4
	I1028 18:29:21.492578   66600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:21.496531   66600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:21.496560   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:22.824198   66600 crio.go:462] duration metric: took 1.331643546s to copy over tarball
	I1028 18:29:22.824276   66600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
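	Editor's note: in the lines above, the runner first lists images with `crictl images --output json`; because the kube-apiserver v1.31.2 image is missing it falls back to copying and extracting the preloaded tarball. A minimal Go sketch of that presence check follows; the hasImage helper is illustrative, and the JSON shape ({"images":[{"repoTags":[...]}]}) is an assumption about crictl's output rather than something taken from this log.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// hasImage shells out to crictl and reports whether any image carries the
// given repo tag (illustrative helper, requires crictl on PATH).
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return false, err
	}
	for _, img := range resp.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
	fmt.Println(ok, err) // false with no error is what triggers the preload tarball path above
}
```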
	I1028 18:29:18.902233   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.902721   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.904121   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.354850   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.355961   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:24.854445   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:21.447903   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.948305   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.448529   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.947708   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.447881   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.947572   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.448433   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.948299   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.447748   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.947863   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.906928   66600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082617931s)
	I1028 18:29:24.906959   66600 crio.go:469] duration metric: took 2.082732511s to extract the tarball
	I1028 18:29:24.906968   66600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:24.944094   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:24.991024   66600 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:24.991048   66600 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:24.991057   66600 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.31.2 crio true true} ...
	I1028 18:29:24.991175   66600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-021370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:24.991262   66600 ssh_runner.go:195] Run: crio config
	I1028 18:29:25.034609   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:25.034629   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:25.034639   66600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:25.034657   66600 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-021370 NodeName:embed-certs-021370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:25.034803   66600 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-021370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:25.034858   66600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:25.044587   66600 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:25.044661   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:25.054150   66600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 18:29:25.070100   66600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:25.085866   66600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1028 18:29:25.101932   66600 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:25.105817   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:25.117399   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:25.235698   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:25.251517   66600 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370 for IP: 192.168.50.62
	I1028 18:29:25.251536   66600 certs.go:194] generating shared ca certs ...
	I1028 18:29:25.251549   66600 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:25.251701   66600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:25.251758   66600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:25.251771   66600 certs.go:256] generating profile certs ...
	I1028 18:29:25.251871   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/client.key
	I1028 18:29:25.251951   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key.1a2ee1e7
	I1028 18:29:25.252010   66600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key
	I1028 18:29:25.252184   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:25.252213   66600 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:25.252222   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:25.252246   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:25.252271   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:25.252291   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:25.252328   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:25.252968   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:25.280370   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:25.323757   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:25.356813   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:25.395729   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 18:29:25.428768   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:25.459929   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:25.485206   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:29:25.514312   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:25.537007   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:25.559926   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:25.582419   66600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:25.599284   66600 ssh_runner.go:195] Run: openssl version
	I1028 18:29:25.605132   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:25.615576   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619856   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619911   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.625516   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:25.636185   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:25.646664   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650958   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650998   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.657176   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:25.668490   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:25.679608   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.683993   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.684041   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.689729   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:25.700817   66600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:25.705214   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:25.711351   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:25.717172   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:25.722879   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:25.728415   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:25.733859   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
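	Editor's note: the six openssl runs above use `-checkend 86400` to confirm each control-plane certificate is still valid for at least one more day; openssl exits 0 when the cert outlives the window and 1 when it does not. A minimal Go sketch of interpreting that exit status follows; the willExpireWithin helper is illustrative, not minikube's code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// willExpireWithin reports whether the certificate at path expires within
// the given number of seconds, using the same openssl invocation as above.
func willExpireWithin(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return false, nil // still valid for at least `seconds` more
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // openssl says the cert expires within the window
	}
	return false, err // openssl itself failed (missing file, bad PEM, ...)
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(expiring, err)
}
```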
	I1028 18:29:25.739422   66600 kubeadm.go:392] StartCluster: {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:25.739492   66600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:25.739534   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.779869   66600 cri.go:89] found id: ""
	I1028 18:29:25.779926   66600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:25.790753   66600 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:25.790771   66600 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:25.790811   66600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:25.800588   66600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:25.801624   66600 kubeconfig.go:125] found "embed-certs-021370" server: "https://192.168.50.62:8443"
	I1028 18:29:25.803466   66600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:25.813212   66600 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I1028 18:29:25.813240   66600 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:25.813254   66600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:25.813312   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.848911   66600 cri.go:89] found id: ""
	I1028 18:29:25.848976   66600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:25.866165   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:25.876454   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:25.876485   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:25.876539   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:29:25.886746   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:25.886802   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:25.897486   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:29:25.907828   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:25.907881   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:25.917520   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.926896   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:25.926950   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.937184   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:29:25.946539   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:25.946585   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:25.956520   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:25.968541   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:26.077716   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.298743   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.220990469s)
	I1028 18:29:27.298777   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.517286   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.582890   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
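	Editor's note: the five commands above rerun the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the cached v1.31.2 binaries. A condensed Go sketch of driving that same sequence via os/exec follows; it is illustrative only, not minikube's restart code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same phase order and config path as in the log above.
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		// Prefer the cached kubeadm/kubelet binaries, mirroring the PATH prefix in the log.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.2:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
```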
	I1028 18:29:27.648091   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:27.648159   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.402969   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:27.405049   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.356621   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.356642   67489 pod_ready.go:82] duration metric: took 12.508989427s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.356653   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361609   67489 pod_ready.go:93] pod "kube-proxy-86rll" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.361627   67489 pod_ready.go:82] duration metric: took 4.968039ms for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361635   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365430   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.365449   67489 pod_ready.go:82] duration metric: took 3.807327ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365460   67489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:28.373442   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.448386   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.948082   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.447496   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.948285   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.947683   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.447813   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.947810   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.448413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.947477   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.148668   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.648320   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.148392   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.648218   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.682858   66600 api_server.go:72] duration metric: took 2.034774456s to wait for apiserver process to appear ...
	I1028 18:29:29.682888   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:29.682915   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:29.683457   66600 api_server.go:269] stopped: https://192.168.50.62:8443/healthz: Get "https://192.168.50.62:8443/healthz": dial tcp 192.168.50.62:8443: connect: connection refused
	I1028 18:29:30.182997   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.878280   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.878304   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:32.878318   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.942789   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.942828   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:29.903158   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:32.404024   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.183344   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.187337   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.187362   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:33.683288   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.687653   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.687680   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:34.183190   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:34.187671   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:29:34.195909   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:34.195938   66600 api_server.go:131] duration metric: took 4.51303648s to wait for apiserver health ...
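	Editor's note: the api_server.go lines above poll https://192.168.50.62:8443/healthz until it returns 200, tolerating connection refused (apiserver not listening yet), 403 (anonymous user before RBAC bootstrap), and 500 (poststarthooks such as rbac/bootstrap-roles still running). A minimal Go sketch of that polling loop is below; the waitHealthz helper is illustrative, and TLS verification is skipped only to keep the example short.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint until it answers 200 OK
// or the deadline passes. Real code should trust the cluster CA instead of
// skipping verification.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz finally returned "ok"
			}
			// 403 and 500 responses show up while bootstrap hooks are still running.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.62:8443/healthz", 2*time.Minute))
}
```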
	I1028 18:29:34.195950   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:34.195959   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:34.197469   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:30.872450   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.372710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:31.448099   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.948269   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.447660   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.947559   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.447716   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.948569   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.447555   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.947612   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.448411   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.947786   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.198803   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:34.221645   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:34.250694   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:34.261167   66600 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:34.261211   66600 system_pods.go:61] "coredns-7c65d6cfc9-bdtd8" [e1fff57c-ba57-4592-9049-7cc80a6f67a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:34.261229   66600 system_pods.go:61] "etcd-embed-certs-021370" [0c805e30-b6d8-416c-97af-c33b142b46e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:34.261240   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [244e08f7-7e8c-4547-b145-9816374fe582] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:34.261251   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [c08dc68e-d441-4d96-8377-957c381c4ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:34.261265   66600 system_pods.go:61] "kube-proxy-7g7lr" [828a4297-7703-46a7-bffe-c8daf83ef4bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:29:34.261277   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [2bc3fea6-0f01-43e9-b69e-deb26980e658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:34.261286   66600 system_pods.go:61] "metrics-server-6867b74b74-gg8bl" [599d8cf3-717d-46b2-a5ba-43e00f46829b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:34.261296   66600 system_pods.go:61] "storage-provisioner" [ad047e20-2de9-447c-83bc-8b835292a25f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:29:34.261307   66600 system_pods.go:74] duration metric: took 10.589505ms to wait for pod list to return data ...
	I1028 18:29:34.261319   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:34.265041   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:34.265066   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:34.265079   66600 node_conditions.go:105] duration metric: took 3.75485ms to run NodePressure ...
	I1028 18:29:34.265098   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:34.567509   66600 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571573   66600 kubeadm.go:739] kubelet initialised
	I1028 18:29:34.571592   66600 kubeadm.go:740] duration metric: took 4.056877ms waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571599   66600 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:34.576872   66600 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:36.586357   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:34.901383   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.902526   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:35.871154   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:37.873138   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.447566   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.947886   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.448276   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.948547   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.447546   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.947974   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.448334   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.948183   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.448396   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.947620   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.083269   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.083414   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:41.083443   66600 pod_ready.go:82] duration metric: took 6.506548177s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:41.083453   66600 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:39.401480   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.402426   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:40.370529   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:42.371580   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:44.372259   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.448306   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.947486   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.448219   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.948295   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.447765   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.947468   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.448454   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.947488   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.447568   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.948070   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.089927   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.589484   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.594775   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:43.403246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.403595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.902160   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.872441   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.371650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.448123   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.948178   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.447989   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.947888   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.448230   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.947692   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.448090   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.947996   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.447949   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.947977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.089584   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.089607   66600 pod_ready.go:82] duration metric: took 7.006147079s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.089619   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093940   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.093959   66600 pod_ready.go:82] duration metric: took 4.332474ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093969   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098279   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.098295   66600 pod_ready.go:82] duration metric: took 4.319206ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098304   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102326   66600 pod_ready.go:93] pod "kube-proxy-7g7lr" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.102341   66600 pod_ready.go:82] duration metric: took 4.03162ms for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102349   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106249   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.106265   66600 pod_ready.go:82] duration metric: took 3.910208ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106279   66600 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:50.112678   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:52.113794   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
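	Editor's note: the repeated pod_ready.go lines in this log are readiness polls: each waits up to 4m0s for a pod's Ready condition to become True. A minimal client-go sketch of that wait is below; the waitPodReady helper, kubeconfig source, and the example pod name are illustrative assumptions, not minikube's implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition as True.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling across transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log purely as an example.
	if err := waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-gg8bl", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
```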
	I1028 18:29:49.902296   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.902424   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.371741   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:53.371833   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.448130   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:51.948450   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:51.948545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:51.987428   67149 cri.go:89] found id: ""
	I1028 18:29:51.987459   67149 logs.go:282] 0 containers: []
	W1028 18:29:51.987470   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:51.987478   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:51.987534   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:52.021429   67149 cri.go:89] found id: ""
	I1028 18:29:52.021452   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.021460   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:52.021466   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:52.021509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:52.055338   67149 cri.go:89] found id: ""
	I1028 18:29:52.055362   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.055373   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:52.055380   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:52.055432   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:52.088673   67149 cri.go:89] found id: ""
	I1028 18:29:52.088697   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.088705   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:52.088711   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:52.088766   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:52.129833   67149 cri.go:89] found id: ""
	I1028 18:29:52.129854   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.129862   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:52.129867   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:52.129918   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:52.162994   67149 cri.go:89] found id: ""
	I1028 18:29:52.163029   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.163040   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:52.163047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:52.163105   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:52.196819   67149 cri.go:89] found id: ""
	I1028 18:29:52.196840   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.196848   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:52.196853   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:52.196906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:52.232924   67149 cri.go:89] found id: ""
	I1028 18:29:52.232955   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.232965   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:52.232977   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:52.232992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:52.283317   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:52.283353   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:52.296648   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:52.296673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:52.423396   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:52.423418   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:52.423429   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:52.497671   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:52.497704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:55.037920   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:55.052539   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:55.052602   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:55.089302   67149 cri.go:89] found id: ""
	I1028 18:29:55.089332   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.089343   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:55.089351   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:55.089404   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:55.127317   67149 cri.go:89] found id: ""
	I1028 18:29:55.127345   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.127352   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:55.127358   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:55.127413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:55.161689   67149 cri.go:89] found id: ""
	I1028 18:29:55.161714   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.161721   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:55.161727   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:55.161772   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:55.196494   67149 cri.go:89] found id: ""
	I1028 18:29:55.196521   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.196534   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:55.196542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:55.196596   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:55.234980   67149 cri.go:89] found id: ""
	I1028 18:29:55.235008   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.235020   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:55.235028   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:55.235086   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:55.274750   67149 cri.go:89] found id: ""
	I1028 18:29:55.274775   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.274783   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:55.274789   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:55.274842   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:55.309839   67149 cri.go:89] found id: ""
	I1028 18:29:55.309865   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.309874   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:55.309881   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:55.309943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:55.358765   67149 cri.go:89] found id: ""
	I1028 18:29:55.358793   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.358805   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:55.358816   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:55.358830   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:55.422821   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:55.422869   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:55.439458   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:55.439482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:55.507743   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:55.507764   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:55.507775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:55.582679   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:55.582710   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:54.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.612967   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:54.402722   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.902816   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:55.372539   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:57.871444   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:58.124907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:58.139125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:58.139181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:58.178829   67149 cri.go:89] found id: ""
	I1028 18:29:58.178853   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.178864   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:58.178871   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:58.178933   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:58.212290   67149 cri.go:89] found id: ""
	I1028 18:29:58.212320   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.212336   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:58.212344   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:58.212402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:58.246108   67149 cri.go:89] found id: ""
	I1028 18:29:58.246135   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.246145   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:58.246152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:58.246212   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:58.280625   67149 cri.go:89] found id: ""
	I1028 18:29:58.280651   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.280662   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:58.280670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:58.280727   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:58.318755   67149 cri.go:89] found id: ""
	I1028 18:29:58.318783   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.318793   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:58.318801   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:58.318853   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:58.356452   67149 cri.go:89] found id: ""
	I1028 18:29:58.356487   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.356499   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:58.356506   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:58.356564   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:58.389906   67149 cri.go:89] found id: ""
	I1028 18:29:58.389928   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.389936   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:58.389943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:58.390001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:58.425883   67149 cri.go:89] found id: ""
	I1028 18:29:58.425911   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.425920   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:58.425929   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:58.425943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:58.484392   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:58.484433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:58.498133   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:58.498159   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:58.572358   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:58.572382   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:58.572397   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:58.654963   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:58.654997   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:58.613408   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.614235   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:59.402355   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.403000   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.370479   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:02.370951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:04.372159   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.196593   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:01.209622   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:01.209693   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:01.243682   67149 cri.go:89] found id: ""
	I1028 18:30:01.243708   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.243718   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:01.243726   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:01.243786   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:01.277617   67149 cri.go:89] found id: ""
	I1028 18:30:01.277646   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.277654   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:01.277660   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:01.277710   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:01.314028   67149 cri.go:89] found id: ""
	I1028 18:30:01.314055   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.314067   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:01.314081   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:01.314152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:01.350324   67149 cri.go:89] found id: ""
	I1028 18:30:01.350348   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.350356   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:01.350362   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:01.350415   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:01.385802   67149 cri.go:89] found id: ""
	I1028 18:30:01.385826   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.385834   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:01.385840   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:01.385883   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:01.421507   67149 cri.go:89] found id: ""
	I1028 18:30:01.421534   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.421545   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:01.421553   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:01.421611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:01.457285   67149 cri.go:89] found id: ""
	I1028 18:30:01.457314   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.457326   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:01.457333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:01.457380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:01.490962   67149 cri.go:89] found id: ""
	I1028 18:30:01.490984   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.490992   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:01.491000   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:01.491012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:01.559906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:01.559937   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:01.559962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:01.639455   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:01.639485   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:01.681968   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:01.681994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:01.736639   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:01.736672   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.251876   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:04.265639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:04.265711   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:04.300133   67149 cri.go:89] found id: ""
	I1028 18:30:04.300159   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.300167   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:04.300173   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:04.300228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:04.335723   67149 cri.go:89] found id: ""
	I1028 18:30:04.335749   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.335760   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:04.335767   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:04.335825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:04.373009   67149 cri.go:89] found id: ""
	I1028 18:30:04.373030   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.373040   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:04.373048   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:04.373113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:04.405969   67149 cri.go:89] found id: ""
	I1028 18:30:04.405993   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.406003   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:04.406011   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:04.406066   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:04.441067   67149 cri.go:89] found id: ""
	I1028 18:30:04.441095   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.441106   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:04.441112   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:04.441176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:04.475231   67149 cri.go:89] found id: ""
	I1028 18:30:04.475260   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.475270   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:04.475277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:04.475342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:04.512970   67149 cri.go:89] found id: ""
	I1028 18:30:04.512998   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.513009   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:04.513017   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:04.513078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:04.547857   67149 cri.go:89] found id: ""
	I1028 18:30:04.547880   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.547890   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:04.547901   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:04.547913   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:04.598870   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:04.598900   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.612678   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:04.612705   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:04.686945   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:04.686967   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:04.686979   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:04.764943   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:04.764992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:03.113309   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.113449   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.613568   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:03.902735   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.903116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:06.872012   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:09.371576   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.310905   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:07.323880   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:07.323946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:07.363597   67149 cri.go:89] found id: ""
	I1028 18:30:07.363626   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.363637   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:07.363645   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:07.363706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:07.401051   67149 cri.go:89] found id: ""
	I1028 18:30:07.401073   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.401082   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:07.401089   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:07.401147   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:07.439710   67149 cri.go:89] found id: ""
	I1028 18:30:07.439735   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.439743   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:07.439748   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:07.439796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:07.476627   67149 cri.go:89] found id: ""
	I1028 18:30:07.476653   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.476663   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:07.476670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:07.476747   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:07.508770   67149 cri.go:89] found id: ""
	I1028 18:30:07.508796   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.508807   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:07.508814   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:07.508874   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:07.543467   67149 cri.go:89] found id: ""
	I1028 18:30:07.543496   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.543506   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:07.543514   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:07.543575   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:07.577181   67149 cri.go:89] found id: ""
	I1028 18:30:07.577204   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.577212   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:07.577217   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:07.577266   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:07.611862   67149 cri.go:89] found id: ""
	I1028 18:30:07.611886   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.611896   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:07.611906   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:07.611924   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:07.699794   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:07.699833   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.747920   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:07.747948   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:07.797402   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:07.797434   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:07.811752   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:07.811778   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:07.881604   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.382191   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:10.394572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:10.394624   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:10.428941   67149 cri.go:89] found id: ""
	I1028 18:30:10.428973   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.428984   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:10.429004   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:10.429071   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:10.462526   67149 cri.go:89] found id: ""
	I1028 18:30:10.462558   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.462569   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:10.462578   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:10.462641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:10.498472   67149 cri.go:89] found id: ""
	I1028 18:30:10.498495   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.498503   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:10.498509   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:10.498557   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:10.535400   67149 cri.go:89] found id: ""
	I1028 18:30:10.535422   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.535430   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:10.535436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:10.535483   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:10.568961   67149 cri.go:89] found id: ""
	I1028 18:30:10.568981   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.568988   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:10.568994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:10.569041   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:10.601273   67149 cri.go:89] found id: ""
	I1028 18:30:10.601306   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.601318   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:10.601325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:10.601383   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:10.638093   67149 cri.go:89] found id: ""
	I1028 18:30:10.638124   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.638135   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:10.638141   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:10.638203   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:10.674624   67149 cri.go:89] found id: ""
	I1028 18:30:10.674654   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.674665   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:10.674675   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:10.674688   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:10.714568   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:10.714602   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:10.764732   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:10.764765   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:10.778111   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:10.778139   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:10.854488   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.854516   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:10.854531   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:10.113469   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.614268   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:08.401958   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:10.402159   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.402379   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:11.872789   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.372947   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:13.438803   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:13.452322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:13.452397   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:13.487337   67149 cri.go:89] found id: ""
	I1028 18:30:13.487360   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.487369   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:13.487381   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:13.487488   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:13.521992   67149 cri.go:89] found id: ""
	I1028 18:30:13.522024   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.522034   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:13.522041   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:13.522099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:13.555315   67149 cri.go:89] found id: ""
	I1028 18:30:13.555347   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.555363   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:13.555371   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:13.555431   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:13.589401   67149 cri.go:89] found id: ""
	I1028 18:30:13.589425   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.589436   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:13.589445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:13.589493   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:13.629340   67149 cri.go:89] found id: ""
	I1028 18:30:13.629370   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.629385   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:13.629393   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:13.629454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:13.667307   67149 cri.go:89] found id: ""
	I1028 18:30:13.667337   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.667348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:13.667355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:13.667418   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:13.701457   67149 cri.go:89] found id: ""
	I1028 18:30:13.701513   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.701526   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:13.701536   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:13.701594   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:13.737989   67149 cri.go:89] found id: ""
	I1028 18:30:13.738023   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.738033   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:13.738043   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:13.738056   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:13.791743   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:13.791777   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:13.805501   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:13.805529   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:13.882239   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:13.882262   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:13.882276   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.963480   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:13.963516   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:15.112587   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:17.113242   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.901879   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.902869   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.871650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:18.872448   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.502799   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:16.516397   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:16.516456   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:16.551670   67149 cri.go:89] found id: ""
	I1028 18:30:16.551701   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.551712   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:16.551719   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:16.551771   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:16.584390   67149 cri.go:89] found id: ""
	I1028 18:30:16.584417   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.584428   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:16.584435   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:16.584510   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:16.620868   67149 cri.go:89] found id: ""
	I1028 18:30:16.620892   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.620899   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:16.620904   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:16.620949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:16.654189   67149 cri.go:89] found id: ""
	I1028 18:30:16.654216   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.654225   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:16.654231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:16.654284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:16.694526   67149 cri.go:89] found id: ""
	I1028 18:30:16.694557   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.694568   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:16.694575   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:16.694640   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:16.728857   67149 cri.go:89] found id: ""
	I1028 18:30:16.728884   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.728892   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:16.728898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:16.728948   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:16.763198   67149 cri.go:89] found id: ""
	I1028 18:30:16.763220   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.763227   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:16.763232   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:16.763282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:16.800120   67149 cri.go:89] found id: ""
	I1028 18:30:16.800142   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.800149   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:16.800157   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:16.800167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:16.852710   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:16.852736   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:16.867365   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:16.867395   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:16.945605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:16.945627   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:16.945643   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:17.022838   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:17.022871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.563585   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:19.577612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:19.577683   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:19.615797   67149 cri.go:89] found id: ""
	I1028 18:30:19.615820   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.615829   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:19.615836   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:19.615882   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:19.654780   67149 cri.go:89] found id: ""
	I1028 18:30:19.654802   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.654810   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:19.654816   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:19.654873   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:19.693502   67149 cri.go:89] found id: ""
	I1028 18:30:19.693532   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.693542   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:19.693550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:19.693611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:19.731869   67149 cri.go:89] found id: ""
	I1028 18:30:19.731902   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.731910   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:19.731916   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:19.731974   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:19.765046   67149 cri.go:89] found id: ""
	I1028 18:30:19.765081   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.765092   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:19.765099   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:19.765158   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:19.798082   67149 cri.go:89] found id: ""
	I1028 18:30:19.798105   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.798113   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:19.798119   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:19.798172   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:19.832562   67149 cri.go:89] found id: ""
	I1028 18:30:19.832590   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.832601   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:19.832608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:19.832676   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:19.867213   67149 cri.go:89] found id: ""
	I1028 18:30:19.867240   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.867251   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:19.867260   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:19.867277   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:19.942276   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:19.942304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.977642   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:19.977671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:20.027077   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:20.027109   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:20.040159   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:20.040181   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:20.113350   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:19.113850   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.613505   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:19.402671   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.902317   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.372438   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.872137   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:22.614379   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:22.628550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:22.628607   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:22.662647   67149 cri.go:89] found id: ""
	I1028 18:30:22.662670   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.662677   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:22.662683   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:22.662732   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:22.696697   67149 cri.go:89] found id: ""
	I1028 18:30:22.696736   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.696747   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:22.696753   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:22.696815   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:22.730011   67149 cri.go:89] found id: ""
	I1028 18:30:22.730039   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.730049   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:22.730056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:22.730114   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:22.766604   67149 cri.go:89] found id: ""
	I1028 18:30:22.766629   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.766639   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:22.766647   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:22.766703   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:22.800581   67149 cri.go:89] found id: ""
	I1028 18:30:22.800608   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.800617   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:22.800625   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:22.800692   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:22.832742   67149 cri.go:89] found id: ""
	I1028 18:30:22.832767   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.832775   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:22.832780   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:22.832823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:22.865850   67149 cri.go:89] found id: ""
	I1028 18:30:22.865876   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.865885   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:22.865892   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:22.865949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:22.904410   67149 cri.go:89] found id: ""
	I1028 18:30:22.904433   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.904443   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:22.904454   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:22.904482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:22.959275   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:22.959310   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:22.972630   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:22.972652   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:23.043851   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:23.043873   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:23.043886   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:23.121657   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:23.121686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:25.662109   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:25.676366   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:25.676443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:25.715192   67149 cri.go:89] found id: ""
	I1028 18:30:25.715216   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.715224   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:25.715230   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:25.715283   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:25.754736   67149 cri.go:89] found id: ""
	I1028 18:30:25.754765   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.754773   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:25.754779   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:25.754823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:25.794179   67149 cri.go:89] found id: ""
	I1028 18:30:25.794207   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.794216   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:25.794224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:25.794278   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:25.833206   67149 cri.go:89] found id: ""
	I1028 18:30:25.833238   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.833246   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:25.833252   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:25.833298   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:25.871628   67149 cri.go:89] found id: ""
	I1028 18:30:25.871659   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.871669   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:25.871677   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:25.871735   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:25.910900   67149 cri.go:89] found id: ""
	I1028 18:30:25.910924   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.910934   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:25.910942   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:25.911001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:25.943972   67149 cri.go:89] found id: ""
	I1028 18:30:25.943992   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.943999   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:25.944004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:25.944059   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:25.982521   67149 cri.go:89] found id: ""
	I1028 18:30:25.982544   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.982551   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:25.982559   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:25.982569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:26.033003   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:26.033031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:26.046480   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:26.046503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 18:30:24.112244   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.113815   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.902652   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.402135   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:25.873075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.372129   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	W1028 18:30:26.117194   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:26.117213   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:26.117230   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:26.195399   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:26.195430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:28.737237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:28.751846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:28.751910   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:28.794259   67149 cri.go:89] found id: ""
	I1028 18:30:28.794290   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.794301   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:28.794308   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:28.794374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:28.827573   67149 cri.go:89] found id: ""
	I1028 18:30:28.827603   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.827611   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:28.827616   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:28.827671   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:28.860676   67149 cri.go:89] found id: ""
	I1028 18:30:28.860702   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.860713   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:28.860721   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:28.860780   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:28.897302   67149 cri.go:89] found id: ""
	I1028 18:30:28.897327   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.897343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:28.897351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:28.897410   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:28.933425   67149 cri.go:89] found id: ""
	I1028 18:30:28.933454   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.933464   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:28.933471   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:28.933535   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:28.966004   67149 cri.go:89] found id: ""
	I1028 18:30:28.966032   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.966043   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:28.966051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:28.966107   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:29.002788   67149 cri.go:89] found id: ""
	I1028 18:30:29.002818   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.002829   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:29.002835   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:29.002894   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:29.033351   67149 cri.go:89] found id: ""
	I1028 18:30:29.033379   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.033389   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:29.033400   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:29.033420   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:29.107997   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:29.108025   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:29.144727   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:29.144753   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:29.206487   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:29.206521   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:29.219722   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:29.219744   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:29.288254   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:28.612485   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.113113   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.902960   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.871338   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.372081   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.789035   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:31.802587   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:31.802650   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:31.838372   67149 cri.go:89] found id: ""
	I1028 18:30:31.838401   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.838410   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:31.838416   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:31.838469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:31.877794   67149 cri.go:89] found id: ""
	I1028 18:30:31.877822   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.877833   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:31.877840   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:31.877896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:31.917442   67149 cri.go:89] found id: ""
	I1028 18:30:31.917472   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.917483   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:31.917490   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:31.917549   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:31.951900   67149 cri.go:89] found id: ""
	I1028 18:30:31.951931   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.951943   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:31.951951   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:31.952008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:31.988011   67149 cri.go:89] found id: ""
	I1028 18:30:31.988040   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.988051   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:31.988058   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:31.988116   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:32.021042   67149 cri.go:89] found id: ""
	I1028 18:30:32.021063   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.021071   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:32.021077   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:32.021124   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:32.053748   67149 cri.go:89] found id: ""
	I1028 18:30:32.053770   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.053778   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:32.053783   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:32.053837   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:32.089725   67149 cri.go:89] found id: ""
	I1028 18:30:32.089756   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.089766   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:32.089777   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:32.089790   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:32.140000   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:32.140031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:32.154023   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:32.154046   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:32.231222   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:32.231242   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:32.231255   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:32.311354   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:32.311388   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:34.852507   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:34.867133   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:34.867198   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:34.901201   67149 cri.go:89] found id: ""
	I1028 18:30:34.901228   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.901238   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:34.901245   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:34.901300   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:34.962788   67149 cri.go:89] found id: ""
	I1028 18:30:34.962814   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.962824   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:34.962835   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:34.962896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:34.996879   67149 cri.go:89] found id: ""
	I1028 18:30:34.996906   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.996917   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:34.996926   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:34.996986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:35.033516   67149 cri.go:89] found id: ""
	I1028 18:30:35.033541   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.033553   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:35.033560   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:35.033622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:35.066903   67149 cri.go:89] found id: ""
	I1028 18:30:35.066933   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.066945   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:35.066953   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:35.067010   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:35.099675   67149 cri.go:89] found id: ""
	I1028 18:30:35.099697   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.099704   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:35.099710   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:35.099755   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:35.133595   67149 cri.go:89] found id: ""
	I1028 18:30:35.133623   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.133633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:35.133641   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:35.133699   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:35.172236   67149 cri.go:89] found id: ""
	I1028 18:30:35.172262   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.172272   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:35.172282   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:35.172296   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:35.224952   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:35.224981   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:35.238554   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:35.238578   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:35.318991   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:35.319024   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:35.319040   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:35.399763   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:35.399799   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:33.612446   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.613847   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.402375   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.402653   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.902346   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:38.372413   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.947847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:37.963147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:37.963210   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.001768   67149 cri.go:89] found id: ""
	I1028 18:30:38.001792   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.001802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:38.001809   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:38.001868   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:38.042877   67149 cri.go:89] found id: ""
	I1028 18:30:38.042905   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.042916   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:38.042924   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:38.042986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:38.078116   67149 cri.go:89] found id: ""
	I1028 18:30:38.078143   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.078154   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:38.078162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:38.078226   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:38.111082   67149 cri.go:89] found id: ""
	I1028 18:30:38.111108   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.111119   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:38.111127   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:38.111187   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:38.144863   67149 cri.go:89] found id: ""
	I1028 18:30:38.144889   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.144898   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:38.144906   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:38.144962   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:38.178671   67149 cri.go:89] found id: ""
	I1028 18:30:38.178701   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.178712   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:38.178719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:38.178774   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:38.218441   67149 cri.go:89] found id: ""
	I1028 18:30:38.218464   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.218472   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:38.218477   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:38.218528   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:38.252697   67149 cri.go:89] found id: ""
	I1028 18:30:38.252719   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.252727   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:38.252736   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:38.252745   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:38.304813   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:38.304853   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:38.318437   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:38.318462   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:38.389959   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:38.389987   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:38.390002   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:38.471462   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:38.471495   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:41.013647   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:41.027167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:41.027233   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.113426   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:39.903261   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.402381   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.871193   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.873502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:41.062559   67149 cri.go:89] found id: ""
	I1028 18:30:41.062590   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.062601   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:41.062609   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:41.062667   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:41.097732   67149 cri.go:89] found id: ""
	I1028 18:30:41.097758   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.097767   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:41.097773   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:41.097819   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:41.133067   67149 cri.go:89] found id: ""
	I1028 18:30:41.133089   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.133097   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:41.133102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:41.133150   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:41.168640   67149 cri.go:89] found id: ""
	I1028 18:30:41.168674   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.168684   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:41.168691   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:41.168754   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:41.206429   67149 cri.go:89] found id: ""
	I1028 18:30:41.206453   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.206463   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:41.206470   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:41.206527   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:41.248326   67149 cri.go:89] found id: ""
	I1028 18:30:41.248350   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.248360   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:41.248369   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:41.248429   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:41.283703   67149 cri.go:89] found id: ""
	I1028 18:30:41.283734   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.283746   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:41.283753   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:41.283810   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:41.327759   67149 cri.go:89] found id: ""
	I1028 18:30:41.327786   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.327796   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:41.327807   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:41.327820   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:41.388563   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:41.388593   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:41.406411   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:41.406435   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:41.490605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:41.490626   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:41.490637   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:41.569386   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:41.569433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.109394   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:44.123047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:44.123113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:44.156762   67149 cri.go:89] found id: ""
	I1028 18:30:44.156792   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.156802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:44.156810   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:44.156867   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:44.192244   67149 cri.go:89] found id: ""
	I1028 18:30:44.192271   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.192282   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:44.192289   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:44.192357   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:44.224059   67149 cri.go:89] found id: ""
	I1028 18:30:44.224094   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.224101   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:44.224115   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:44.224168   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:44.258750   67149 cri.go:89] found id: ""
	I1028 18:30:44.258779   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.258789   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:44.258797   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:44.258854   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:44.295600   67149 cri.go:89] found id: ""
	I1028 18:30:44.295624   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.295632   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:44.295638   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:44.295684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:44.327278   67149 cri.go:89] found id: ""
	I1028 18:30:44.327302   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.327309   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:44.327315   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:44.327370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:44.360734   67149 cri.go:89] found id: ""
	I1028 18:30:44.360760   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.360768   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:44.360774   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:44.360822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:44.398198   67149 cri.go:89] found id: ""
	I1028 18:30:44.398224   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.398234   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:44.398249   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:44.398261   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:44.476135   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:44.476167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.514073   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:44.514105   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:44.563001   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:44.563033   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:44.576882   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:44.576912   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:44.648532   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:43.112043   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.113135   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.113382   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:44.403147   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:46.902890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.370854   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.371758   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.373946   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.149133   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:47.165612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:47.165696   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:47.203960   67149 cri.go:89] found id: ""
	I1028 18:30:47.203987   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.203996   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:47.204002   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:47.204065   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:47.236731   67149 cri.go:89] found id: ""
	I1028 18:30:47.236757   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.236766   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:47.236774   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:47.236828   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:47.273779   67149 cri.go:89] found id: ""
	I1028 18:30:47.273808   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.273820   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:47.273826   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:47.273878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:47.309996   67149 cri.go:89] found id: ""
	I1028 18:30:47.310020   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.310028   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:47.310034   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:47.310108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:47.352904   67149 cri.go:89] found id: ""
	I1028 18:30:47.352925   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.352934   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:47.352939   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:47.352990   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:47.389641   67149 cri.go:89] found id: ""
	I1028 18:30:47.389660   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.389667   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:47.389672   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:47.389718   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:47.422591   67149 cri.go:89] found id: ""
	I1028 18:30:47.422622   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.422632   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:47.422639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:47.422694   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:47.454849   67149 cri.go:89] found id: ""
	I1028 18:30:47.454876   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.454886   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:47.454895   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:47.454916   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:47.506176   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:47.506203   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:47.519084   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:47.519108   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:47.585660   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.585681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:47.585696   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:47.664904   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:47.664939   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:50.203775   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:50.216923   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:50.216992   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:50.252506   67149 cri.go:89] found id: ""
	I1028 18:30:50.252531   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.252541   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:50.252548   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:50.252608   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:50.288641   67149 cri.go:89] found id: ""
	I1028 18:30:50.288669   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.288678   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:50.288684   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:50.288739   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:50.322130   67149 cri.go:89] found id: ""
	I1028 18:30:50.322163   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.322174   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:50.322182   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:50.322240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:50.359508   67149 cri.go:89] found id: ""
	I1028 18:30:50.359536   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.359546   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:50.359554   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:50.359617   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:50.393571   67149 cri.go:89] found id: ""
	I1028 18:30:50.393607   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.393618   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:50.393626   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:50.393685   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:50.428683   67149 cri.go:89] found id: ""
	I1028 18:30:50.428705   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.428713   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:50.428719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:50.428767   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:50.464086   67149 cri.go:89] found id: ""
	I1028 18:30:50.464111   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.464119   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:50.464125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:50.464183   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:50.496695   67149 cri.go:89] found id: ""
	I1028 18:30:50.496726   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.496736   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:50.496745   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:50.496755   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:50.545495   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:50.545526   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:50.558819   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:50.558852   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:50.636344   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:50.636369   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:50.636384   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:50.720270   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:50.720304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:49.612927   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.613353   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.402779   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.901517   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.873490   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:54.372373   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.261194   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:53.274451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:53.274507   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:53.306258   67149 cri.go:89] found id: ""
	I1028 18:30:53.306286   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.306295   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:53.306301   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:53.306362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:53.340222   67149 cri.go:89] found id: ""
	I1028 18:30:53.340244   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.340253   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:53.340258   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:53.340322   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:53.377726   67149 cri.go:89] found id: ""
	I1028 18:30:53.377750   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.377760   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:53.377767   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:53.377820   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:53.414228   67149 cri.go:89] found id: ""
	I1028 18:30:53.414252   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.414262   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:53.414275   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:53.414332   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:53.449152   67149 cri.go:89] found id: ""
	I1028 18:30:53.449179   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.449186   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:53.449192   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:53.449237   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:53.485678   67149 cri.go:89] found id: ""
	I1028 18:30:53.485705   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.485716   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:53.485723   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:53.485784   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:53.520764   67149 cri.go:89] found id: ""
	I1028 18:30:53.520791   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.520802   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:53.520810   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:53.520870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:53.561153   67149 cri.go:89] found id: ""
	I1028 18:30:53.561176   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.561184   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:53.561192   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:53.561202   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:53.642192   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:53.642242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.686527   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:53.686567   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:53.740815   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:53.740849   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:53.754577   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:53.754604   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:53.823717   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:54.112985   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.612820   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.903128   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:55.903482   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.372798   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.871814   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.324847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:56.338572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:56.338628   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:56.375482   67149 cri.go:89] found id: ""
	I1028 18:30:56.375506   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.375517   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:56.375524   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:56.375580   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:56.407894   67149 cri.go:89] found id: ""
	I1028 18:30:56.407921   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.407931   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:56.407938   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:56.407993   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:56.447006   67149 cri.go:89] found id: ""
	I1028 18:30:56.447037   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.447048   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:56.447055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:56.447112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:56.483850   67149 cri.go:89] found id: ""
	I1028 18:30:56.483880   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.483890   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:56.483898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:56.483958   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:56.520008   67149 cri.go:89] found id: ""
	I1028 18:30:56.520038   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.520045   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:56.520051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:56.520099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:56.552567   67149 cri.go:89] found id: ""
	I1028 18:30:56.552592   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.552600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:56.552608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:56.552658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:56.591277   67149 cri.go:89] found id: ""
	I1028 18:30:56.591297   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.591305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:56.591311   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:56.591362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:56.632164   67149 cri.go:89] found id: ""
	I1028 18:30:56.632186   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.632194   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:56.632202   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:56.632219   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:56.683590   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:56.683623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:56.698509   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:56.698539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:56.777141   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.777171   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:56.777188   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:56.851801   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:56.851842   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.394266   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:59.408460   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:59.408545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:59.444066   67149 cri.go:89] found id: ""
	I1028 18:30:59.444092   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.444104   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:59.444112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:59.444165   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:59.479531   67149 cri.go:89] found id: ""
	I1028 18:30:59.479557   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.479568   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:59.479576   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:59.479622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:59.519467   67149 cri.go:89] found id: ""
	I1028 18:30:59.519489   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.519496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:59.519502   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:59.519546   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:59.551108   67149 cri.go:89] found id: ""
	I1028 18:30:59.551131   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.551140   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:59.551146   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:59.551197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:59.585875   67149 cri.go:89] found id: ""
	I1028 18:30:59.585899   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.585907   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:59.585912   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:59.585968   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:59.620571   67149 cri.go:89] found id: ""
	I1028 18:30:59.620595   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.620602   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:59.620608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:59.620655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:59.653927   67149 cri.go:89] found id: ""
	I1028 18:30:59.653954   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.653965   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:59.653972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:59.654039   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:59.689138   67149 cri.go:89] found id: ""
	I1028 18:30:59.689160   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.689168   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:59.689175   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:59.689185   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:59.768231   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:59.768270   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.811980   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:59.812007   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:59.864509   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:59.864543   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:59.879329   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:59.879354   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:59.950134   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:59.112280   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:01.113341   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.402845   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.902628   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.904642   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.872873   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:03.371672   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.450237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:02.464689   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:02.464765   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:02.500938   67149 cri.go:89] found id: ""
	I1028 18:31:02.500964   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.500975   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:02.500982   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:02.501043   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:02.534580   67149 cri.go:89] found id: ""
	I1028 18:31:02.534608   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.534620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:02.534628   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:02.534684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:02.570203   67149 cri.go:89] found id: ""
	I1028 18:31:02.570224   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.570231   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:02.570237   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:02.570284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:02.606037   67149 cri.go:89] found id: ""
	I1028 18:31:02.606064   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.606072   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:02.606082   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:02.606135   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:02.640622   67149 cri.go:89] found id: ""
	I1028 18:31:02.640646   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.640656   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:02.640663   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:02.640723   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:02.676406   67149 cri.go:89] found id: ""
	I1028 18:31:02.676434   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.676444   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:02.676451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:02.676520   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:02.710284   67149 cri.go:89] found id: ""
	I1028 18:31:02.710308   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.710316   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:02.710322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:02.710376   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:02.750853   67149 cri.go:89] found id: ""
	I1028 18:31:02.750899   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.750910   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:02.750918   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:02.750929   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:02.825886   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.825913   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:02.825927   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:02.904828   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:02.904857   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:02.941886   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:02.941922   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:02.991603   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:02.991632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.505655   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:05.520582   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:05.520638   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:05.558724   67149 cri.go:89] found id: ""
	I1028 18:31:05.558753   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.558763   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:05.558770   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:05.558816   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:05.597864   67149 cri.go:89] found id: ""
	I1028 18:31:05.597885   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.597893   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:05.597898   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:05.597956   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:05.643571   67149 cri.go:89] found id: ""
	I1028 18:31:05.643602   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.643613   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:05.643620   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:05.643679   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:05.682010   67149 cri.go:89] found id: ""
	I1028 18:31:05.682039   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.682048   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:05.682053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:05.682106   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:05.716043   67149 cri.go:89] found id: ""
	I1028 18:31:05.716067   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.716080   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:05.716086   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:05.716134   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:05.750962   67149 cri.go:89] found id: ""
	I1028 18:31:05.750995   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.751010   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:05.751016   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:05.751078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:05.785059   67149 cri.go:89] found id: ""
	I1028 18:31:05.785111   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.785124   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:05.785132   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:05.785193   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:05.833525   67149 cri.go:89] found id: ""
	I1028 18:31:05.833550   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.833559   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:05.833566   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:05.833579   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:05.887766   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:05.887796   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.902575   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:05.902606   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:05.975082   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:05.975108   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:05.975122   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:03.613265   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.114362   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.402167   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:07.402252   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.873147   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:08.370748   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.050369   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:06.050396   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.593506   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:08.606188   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:08.606251   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:08.645186   67149 cri.go:89] found id: ""
	I1028 18:31:08.645217   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.645227   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:08.645235   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:08.645294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:08.680728   67149 cri.go:89] found id: ""
	I1028 18:31:08.680759   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.680771   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:08.680778   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:08.680833   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:08.714733   67149 cri.go:89] found id: ""
	I1028 18:31:08.714760   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.714772   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:08.714779   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:08.714844   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:08.750293   67149 cri.go:89] found id: ""
	I1028 18:31:08.750323   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.750333   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:08.750339   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:08.750390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:08.784521   67149 cri.go:89] found id: ""
	I1028 18:31:08.784548   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.784559   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:08.784566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:08.784629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:08.818808   67149 cri.go:89] found id: ""
	I1028 18:31:08.818838   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.818849   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:08.818857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:08.818920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:08.855575   67149 cri.go:89] found id: ""
	I1028 18:31:08.855608   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.855619   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:08.855633   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:08.855690   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:08.892996   67149 cri.go:89] found id: ""
	I1028 18:31:08.893024   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.893035   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:08.893045   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:08.893064   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.937056   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:08.937084   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:08.989013   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:08.989048   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:09.002048   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:09.002077   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:09.075247   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:09.075277   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:09.075290   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:08.612396   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.612689   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:09.402595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.903403   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.371335   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:12.371435   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.371502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.654701   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:11.668066   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:11.668146   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:11.701666   67149 cri.go:89] found id: ""
	I1028 18:31:11.701693   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.701703   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:11.701710   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:11.701769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:11.738342   67149 cri.go:89] found id: ""
	I1028 18:31:11.738368   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.738376   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:11.738381   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:11.738428   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:11.772009   67149 cri.go:89] found id: ""
	I1028 18:31:11.772035   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.772045   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:11.772053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:11.772118   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:11.816210   67149 cri.go:89] found id: ""
	I1028 18:31:11.816237   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.816245   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:11.816251   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:11.816314   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:11.856675   67149 cri.go:89] found id: ""
	I1028 18:31:11.856704   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.856714   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:11.856722   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:11.856785   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:11.896566   67149 cri.go:89] found id: ""
	I1028 18:31:11.896592   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.896600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:11.896606   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:11.896665   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:11.932599   67149 cri.go:89] found id: ""
	I1028 18:31:11.932624   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.932633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:11.932640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:11.932704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:11.966952   67149 cri.go:89] found id: ""
	I1028 18:31:11.966982   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.967008   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:11.967019   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:11.967037   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:12.016465   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:12.016502   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:12.029314   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:12.029343   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:12.098906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:12.098936   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:12.098954   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:12.176440   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:12.176489   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:14.720173   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:14.733796   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:14.733848   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:14.774072   67149 cri.go:89] found id: ""
	I1028 18:31:14.774093   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.774100   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:14.774106   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:14.774152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:14.816116   67149 cri.go:89] found id: ""
	I1028 18:31:14.816145   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.816158   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:14.816166   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:14.816224   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:14.851167   67149 cri.go:89] found id: ""
	I1028 18:31:14.851189   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.851196   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:14.851202   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:14.851247   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:14.885887   67149 cri.go:89] found id: ""
	I1028 18:31:14.885918   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.885926   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:14.885931   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:14.885976   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:14.923787   67149 cri.go:89] found id: ""
	I1028 18:31:14.923815   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.923826   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:14.923833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:14.923892   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:14.960117   67149 cri.go:89] found id: ""
	I1028 18:31:14.960148   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.960160   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:14.960167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:14.960240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:14.998418   67149 cri.go:89] found id: ""
	I1028 18:31:14.998458   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.998470   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:14.998485   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:14.998545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:15.031985   67149 cri.go:89] found id: ""
	I1028 18:31:15.032005   67149 logs.go:282] 0 containers: []
	W1028 18:31:15.032014   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:15.032027   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:15.032038   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:15.045239   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:15.045264   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:15.118954   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:15.118978   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:15.118994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:15.200538   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:15.200569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:15.243581   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:15.243603   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:13.112232   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:15.113498   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.612946   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.401769   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.402729   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.871916   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.872378   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.794670   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:17.808325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:17.808380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:17.841888   67149 cri.go:89] found id: ""
	I1028 18:31:17.841911   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.841919   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:17.841925   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:17.841979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:17.881241   67149 cri.go:89] found id: ""
	I1028 18:31:17.881261   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.881269   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:17.881274   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:17.881331   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:17.922394   67149 cri.go:89] found id: ""
	I1028 18:31:17.922419   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.922428   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:17.922434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:17.922498   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:17.963519   67149 cri.go:89] found id: ""
	I1028 18:31:17.963546   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.963558   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:17.963566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:17.963641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:18.003181   67149 cri.go:89] found id: ""
	I1028 18:31:18.003202   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.003209   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:18.003214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:18.003261   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:18.040305   67149 cri.go:89] found id: ""
	I1028 18:31:18.040338   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.040348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:18.040356   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:18.040413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:18.077671   67149 cri.go:89] found id: ""
	I1028 18:31:18.077696   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.077708   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:18.077715   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:18.077777   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:18.116155   67149 cri.go:89] found id: ""
	I1028 18:31:18.116176   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.116182   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:18.116190   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:18.116201   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:18.168343   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:18.168370   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:18.181962   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:18.181988   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:18.260227   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:18.260251   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:18.260265   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:18.346588   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:18.346620   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:20.885832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:20.899053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:20.899121   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:20.954770   67149 cri.go:89] found id: ""
	I1028 18:31:20.954797   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.954806   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:20.954812   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:20.954870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:20.989809   67149 cri.go:89] found id: ""
	I1028 18:31:20.989834   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.989842   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:20.989848   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:20.989900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:21.027150   67149 cri.go:89] found id: ""
	I1028 18:31:21.027179   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.027191   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:21.027199   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:21.027259   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:20.113283   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:22.612710   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.902738   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.403607   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.371574   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.871000   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.061235   67149 cri.go:89] found id: ""
	I1028 18:31:21.061260   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.061270   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:21.061277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:21.061337   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:21.095451   67149 cri.go:89] found id: ""
	I1028 18:31:21.095473   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.095481   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:21.095487   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:21.095540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:21.135576   67149 cri.go:89] found id: ""
	I1028 18:31:21.135598   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.135606   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:21.135612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:21.135660   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:21.170816   67149 cri.go:89] found id: ""
	I1028 18:31:21.170845   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.170854   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:21.170860   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:21.170920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:21.204616   67149 cri.go:89] found id: ""
	I1028 18:31:21.204649   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.204660   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:21.204672   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:21.204686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:21.254523   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:21.254556   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:21.267981   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:21.268005   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:21.336786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:21.336813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:21.336828   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:21.420596   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:21.420625   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:23.962346   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:23.976628   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:23.976697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:24.016418   67149 cri.go:89] found id: ""
	I1028 18:31:24.016444   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.016453   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:24.016458   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:24.016533   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:24.051448   67149 cri.go:89] found id: ""
	I1028 18:31:24.051474   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.051483   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:24.051488   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:24.051554   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:24.090787   67149 cri.go:89] found id: ""
	I1028 18:31:24.090816   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.090829   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:24.090836   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:24.090900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:24.126315   67149 cri.go:89] found id: ""
	I1028 18:31:24.126342   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.126349   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:24.126355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:24.126402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:24.161340   67149 cri.go:89] found id: ""
	I1028 18:31:24.161367   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.161379   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:24.161387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:24.161445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:24.195991   67149 cri.go:89] found id: ""
	I1028 18:31:24.196017   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.196028   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:24.196036   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:24.196084   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:24.229789   67149 cri.go:89] found id: ""
	I1028 18:31:24.229822   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.229837   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:24.229845   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:24.229906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:24.264724   67149 cri.go:89] found id: ""
	I1028 18:31:24.264748   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.264757   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:24.264765   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:24.264775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:24.303551   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:24.303574   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:24.351693   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:24.351725   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:24.364537   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:24.364566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:24.436935   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:24.436955   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:24.436966   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:25.112870   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.612492   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.902008   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.902544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.902622   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.871089   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.871265   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:29.872201   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.014928   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:27.029540   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:27.029609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:27.064598   67149 cri.go:89] found id: ""
	I1028 18:31:27.064626   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.064636   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:27.064643   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:27.064704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:27.099432   67149 cri.go:89] found id: ""
	I1028 18:31:27.099455   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.099465   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:27.099472   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:27.099531   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:27.133961   67149 cri.go:89] found id: ""
	I1028 18:31:27.133996   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.134006   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:27.134012   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:27.134075   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:27.171976   67149 cri.go:89] found id: ""
	I1028 18:31:27.172003   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.172014   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:27.172022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:27.172092   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:27.205681   67149 cri.go:89] found id: ""
	I1028 18:31:27.205710   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.205721   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:27.205730   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:27.205793   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:27.244571   67149 cri.go:89] found id: ""
	I1028 18:31:27.244603   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.244612   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:27.244617   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:27.244661   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:27.281692   67149 cri.go:89] found id: ""
	I1028 18:31:27.281722   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.281738   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:27.281746   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:27.281800   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:27.335003   67149 cri.go:89] found id: ""
	I1028 18:31:27.335033   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.335041   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:27.335049   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:27.335066   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:27.353992   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:27.354017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:27.457103   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:27.457125   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:27.457136   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.537717   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:27.537746   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:27.579842   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:27.579870   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.133749   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:30.147518   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:30.147576   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:30.182683   67149 cri.go:89] found id: ""
	I1028 18:31:30.182711   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.182722   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:30.182729   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:30.182792   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:30.215088   67149 cri.go:89] found id: ""
	I1028 18:31:30.215109   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.215118   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:30.215124   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:30.215176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:30.250169   67149 cri.go:89] found id: ""
	I1028 18:31:30.250194   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.250202   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:30.250207   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:30.250284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:30.286028   67149 cri.go:89] found id: ""
	I1028 18:31:30.286055   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.286062   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:30.286069   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:30.286112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:30.320503   67149 cri.go:89] found id: ""
	I1028 18:31:30.320528   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.320539   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:30.320547   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:30.320604   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:30.352773   67149 cri.go:89] found id: ""
	I1028 18:31:30.352793   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.352800   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:30.352806   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:30.352859   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:30.385922   67149 cri.go:89] found id: ""
	I1028 18:31:30.385944   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.385951   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:30.385956   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:30.385999   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:30.421909   67149 cri.go:89] found id: ""
	I1028 18:31:30.421933   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.421945   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:30.421956   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:30.421971   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.470917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:30.470944   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:30.484033   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:30.484059   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:30.554810   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:30.554836   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:30.554850   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:30.634403   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:30.634432   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:30.113496   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.613397   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:30.402688   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.902277   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:31.872598   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:34.371198   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:33.182127   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:33.194994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:33.195063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:33.233076   67149 cri.go:89] found id: ""
	I1028 18:31:33.233098   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.233106   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:33.233112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:33.233160   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:33.266963   67149 cri.go:89] found id: ""
	I1028 18:31:33.266998   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.267021   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:33.267028   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:33.267083   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:33.305888   67149 cri.go:89] found id: ""
	I1028 18:31:33.305914   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.305922   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:33.305928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:33.305979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:33.339451   67149 cri.go:89] found id: ""
	I1028 18:31:33.339479   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.339489   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:33.339496   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:33.339555   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:33.375038   67149 cri.go:89] found id: ""
	I1028 18:31:33.375065   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.375073   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:33.375079   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:33.375125   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:33.409157   67149 cri.go:89] found id: ""
	I1028 18:31:33.409176   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.409183   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:33.409189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:33.409243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:33.449108   67149 cri.go:89] found id: ""
	I1028 18:31:33.449133   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.449149   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:33.449155   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:33.449227   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:33.491194   67149 cri.go:89] found id: ""
	I1028 18:31:33.491215   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.491224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:33.491232   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:33.491250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.530590   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:33.530618   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:33.581933   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:33.581962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:33.595387   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:33.595416   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:33.664855   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:33.664882   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:33.664899   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:35.113673   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.612606   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:35.401938   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.402270   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.372499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:38.372670   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.242724   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:36.256152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:36.256221   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:36.292452   67149 cri.go:89] found id: ""
	I1028 18:31:36.292494   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.292504   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:36.292511   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:36.292568   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:36.325210   67149 cri.go:89] found id: ""
	I1028 18:31:36.325231   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.325238   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:36.325244   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:36.325293   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:36.356738   67149 cri.go:89] found id: ""
	I1028 18:31:36.356757   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.356764   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:36.356769   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:36.356827   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:36.389678   67149 cri.go:89] found id: ""
	I1028 18:31:36.389704   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.389712   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:36.389717   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:36.389775   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:36.422956   67149 cri.go:89] found id: ""
	I1028 18:31:36.422989   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.422998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:36.423005   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:36.423061   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:36.456877   67149 cri.go:89] found id: ""
	I1028 18:31:36.456904   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.456914   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:36.456921   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:36.456983   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:36.489728   67149 cri.go:89] found id: ""
	I1028 18:31:36.489758   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.489766   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:36.489772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:36.489829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:36.524307   67149 cri.go:89] found id: ""
	I1028 18:31:36.524338   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.524350   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:36.524360   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:36.524372   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:36.574771   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:36.574800   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:36.587485   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:36.587506   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:36.655922   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:36.655949   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:36.655962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.738312   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:36.738352   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.279425   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:39.293108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:39.293167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:39.325542   67149 cri.go:89] found id: ""
	I1028 18:31:39.325573   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.325584   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:39.325592   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:39.325656   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:39.357581   67149 cri.go:89] found id: ""
	I1028 18:31:39.357609   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.357620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:39.357627   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:39.357681   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:39.394833   67149 cri.go:89] found id: ""
	I1028 18:31:39.394853   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.394860   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:39.394866   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:39.394916   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:39.430151   67149 cri.go:89] found id: ""
	I1028 18:31:39.430178   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.430188   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:39.430196   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:39.430254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:39.468060   67149 cri.go:89] found id: ""
	I1028 18:31:39.468089   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.468100   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:39.468108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:39.468181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:39.503702   67149 cri.go:89] found id: ""
	I1028 18:31:39.503734   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.503752   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:39.503761   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:39.503829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:39.536193   67149 cri.go:89] found id: ""
	I1028 18:31:39.536221   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.536233   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:39.536240   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:39.536305   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:39.570194   67149 cri.go:89] found id: ""
	I1028 18:31:39.570215   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.570224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:39.570232   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:39.570245   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:39.647179   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:39.647207   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:39.647220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:39.725980   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:39.726012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.765671   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:39.765704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:39.818257   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:39.818289   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:39.614055   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.112561   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:39.902061   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.402314   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:40.871483   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.872270   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.332335   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:42.344964   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:42.345031   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:42.380904   67149 cri.go:89] found id: ""
	I1028 18:31:42.380926   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.380933   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:42.380938   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:42.380982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:42.414361   67149 cri.go:89] found id: ""
	I1028 18:31:42.414385   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.414393   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:42.414399   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:42.414443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:42.447931   67149 cri.go:89] found id: ""
	I1028 18:31:42.447957   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.447968   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:42.447975   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:42.448024   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:42.483262   67149 cri.go:89] found id: ""
	I1028 18:31:42.483283   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.483296   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:42.483301   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:42.483365   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:42.516665   67149 cri.go:89] found id: ""
	I1028 18:31:42.516693   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.516702   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:42.516709   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:42.516776   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:42.550160   67149 cri.go:89] found id: ""
	I1028 18:31:42.550181   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.550188   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:42.550193   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:42.550238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:42.583509   67149 cri.go:89] found id: ""
	I1028 18:31:42.583535   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.583546   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:42.583552   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:42.583611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:42.619276   67149 cri.go:89] found id: ""
	I1028 18:31:42.619312   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.619320   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:42.619328   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:42.619338   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:42.692442   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:42.692487   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:42.731768   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:42.731798   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:42.783997   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:42.784043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.797809   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:42.797834   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:42.863351   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.363648   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:45.376277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:45.376341   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:45.415231   67149 cri.go:89] found id: ""
	I1028 18:31:45.415255   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.415265   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:45.415273   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:45.415330   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:45.451133   67149 cri.go:89] found id: ""
	I1028 18:31:45.451157   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.451164   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:45.451170   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:45.451228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:45.483526   67149 cri.go:89] found id: ""
	I1028 18:31:45.483552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.483562   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:45.483567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:45.483621   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:45.515799   67149 cri.go:89] found id: ""
	I1028 18:31:45.515828   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.515838   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:45.515846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:45.515906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:45.548043   67149 cri.go:89] found id: ""
	I1028 18:31:45.548071   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.548082   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:45.548090   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:45.548153   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:45.581525   67149 cri.go:89] found id: ""
	I1028 18:31:45.581552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.581563   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:45.581570   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:45.581629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:45.622258   67149 cri.go:89] found id: ""
	I1028 18:31:45.622282   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.622290   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:45.622296   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:45.622353   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:45.661255   67149 cri.go:89] found id: ""
	I1028 18:31:45.661275   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.661284   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:45.661292   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:45.661304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:45.675209   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:45.675242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:45.737546   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.737573   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:45.737592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:45.816012   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:45.816043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:45.854135   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:45.854167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:44.612155   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.612875   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:44.402557   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.902339   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:45.371918   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:47.872710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.875644   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:48.406233   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:48.418950   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:48.419001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:48.452933   67149 cri.go:89] found id: ""
	I1028 18:31:48.452952   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.452961   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:48.452975   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:48.453034   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:48.489604   67149 cri.go:89] found id: ""
	I1028 18:31:48.489630   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.489640   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:48.489648   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:48.489706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:48.525463   67149 cri.go:89] found id: ""
	I1028 18:31:48.525493   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.525504   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:48.525511   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:48.525566   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:48.559266   67149 cri.go:89] found id: ""
	I1028 18:31:48.559294   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.559302   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:48.559308   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:48.559363   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:48.592670   67149 cri.go:89] found id: ""
	I1028 18:31:48.592695   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.592706   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:48.592714   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:48.592769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:48.627175   67149 cri.go:89] found id: ""
	I1028 18:31:48.627196   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.627205   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:48.627213   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:48.627260   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:48.661864   67149 cri.go:89] found id: ""
	I1028 18:31:48.661887   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.661895   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:48.661901   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:48.661946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:48.696731   67149 cri.go:89] found id: ""
	I1028 18:31:48.696756   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.696765   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:48.696775   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:48.696788   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.745390   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:48.745417   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:48.759218   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:48.759241   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:48.830299   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:48.830331   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:48.830349   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:48.909934   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:48.909963   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:49.112884   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.613217   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.402707   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.903103   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:52.373283   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.872603   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.451597   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:51.464889   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:51.464943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:51.499962   67149 cri.go:89] found id: ""
	I1028 18:31:51.499990   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.500001   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:51.500010   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:51.500069   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:51.532341   67149 cri.go:89] found id: ""
	I1028 18:31:51.532370   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.532380   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:51.532388   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:51.532443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:51.565531   67149 cri.go:89] found id: ""
	I1028 18:31:51.565554   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.565561   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:51.565567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:51.565614   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:51.602859   67149 cri.go:89] found id: ""
	I1028 18:31:51.602882   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.602894   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:51.602899   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:51.602943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:51.639896   67149 cri.go:89] found id: ""
	I1028 18:31:51.639915   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.639922   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:51.639928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:51.639972   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:51.675728   67149 cri.go:89] found id: ""
	I1028 18:31:51.675755   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.675762   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:51.675768   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:51.675825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:51.710285   67149 cri.go:89] found id: ""
	I1028 18:31:51.710312   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.710320   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:51.710326   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:51.710374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:51.744527   67149 cri.go:89] found id: ""
	I1028 18:31:51.744551   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.744560   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:51.744570   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:51.744584   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.780580   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:51.780614   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:51.832979   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:51.833008   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:51.846389   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:51.846415   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:51.918177   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:51.918196   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:51.918210   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.493806   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:54.506468   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:54.506526   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:54.540500   67149 cri.go:89] found id: ""
	I1028 18:31:54.540527   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.540537   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:54.540544   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:54.540601   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:54.573399   67149 cri.go:89] found id: ""
	I1028 18:31:54.573428   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.573438   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:54.573448   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:54.573509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:54.606227   67149 cri.go:89] found id: ""
	I1028 18:31:54.606262   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.606272   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:54.606278   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:54.606338   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:54.641143   67149 cri.go:89] found id: ""
	I1028 18:31:54.641163   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.641172   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:54.641179   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:54.641238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:54.674269   67149 cri.go:89] found id: ""
	I1028 18:31:54.674292   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.674300   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:54.674306   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:54.674352   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:54.707160   67149 cri.go:89] found id: ""
	I1028 18:31:54.707183   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.707191   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:54.707197   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:54.707242   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:54.746522   67149 cri.go:89] found id: ""
	I1028 18:31:54.746544   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.746552   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:54.746558   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:54.746613   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:54.779315   67149 cri.go:89] found id: ""
	I1028 18:31:54.779341   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.779348   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:54.779356   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:54.779367   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:54.830987   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:54.831017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:54.844846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:54.844871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:54.913540   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:54.913558   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:54.913568   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.994220   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:54.994250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:54.112785   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.114029   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.401657   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.402726   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.371756   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:59.372308   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.532820   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:57.545394   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:57.545454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:57.582329   67149 cri.go:89] found id: ""
	I1028 18:31:57.582355   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.582365   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:57.582372   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:57.582438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:57.616082   67149 cri.go:89] found id: ""
	I1028 18:31:57.616107   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.616115   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:57.616123   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:57.616167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:57.650118   67149 cri.go:89] found id: ""
	I1028 18:31:57.650144   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.650153   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:57.650162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:57.650215   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:57.684801   67149 cri.go:89] found id: ""
	I1028 18:31:57.684823   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.684831   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:57.684839   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:57.684887   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:57.722396   67149 cri.go:89] found id: ""
	I1028 18:31:57.722423   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.722431   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:57.722437   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:57.722516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:57.759779   67149 cri.go:89] found id: ""
	I1028 18:31:57.759802   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.759809   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:57.759818   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:57.759861   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:57.793977   67149 cri.go:89] found id: ""
	I1028 18:31:57.794034   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.794045   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:57.794053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:57.794117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:57.831104   67149 cri.go:89] found id: ""
	I1028 18:31:57.831130   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.831140   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:57.831151   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:57.831164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:57.920155   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:57.920174   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:57.920184   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:57.999677   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:57.999709   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:58.036647   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:58.036673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:58.088299   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:58.088333   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.601832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:00.615434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:00.615491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:00.653344   67149 cri.go:89] found id: ""
	I1028 18:32:00.653372   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.653383   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:00.653390   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:00.653450   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:00.693086   67149 cri.go:89] found id: ""
	I1028 18:32:00.693111   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.693122   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:00.693130   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:00.693188   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:00.728129   67149 cri.go:89] found id: ""
	I1028 18:32:00.728157   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.728167   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:00.728181   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:00.728243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:00.760540   67149 cri.go:89] found id: ""
	I1028 18:32:00.760568   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.760579   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:00.760586   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:00.760654   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:00.796633   67149 cri.go:89] found id: ""
	I1028 18:32:00.796662   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.796672   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:00.796680   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:00.796740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:00.829924   67149 cri.go:89] found id: ""
	I1028 18:32:00.829954   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.829966   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:00.829974   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:00.830028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:00.861565   67149 cri.go:89] found id: ""
	I1028 18:32:00.861586   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.861593   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:00.861599   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:00.861655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:00.894129   67149 cri.go:89] found id: ""
	I1028 18:32:00.894154   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.894162   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:00.894169   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:00.894180   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.908303   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:00.908331   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:00.974521   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:00.974543   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:00.974557   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:58.612554   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.612655   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:58.901908   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.902851   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.872423   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.873235   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.048113   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:01.048140   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:01.086657   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:01.086731   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.639781   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:03.652239   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:03.652291   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:03.687098   67149 cri.go:89] found id: ""
	I1028 18:32:03.687120   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.687129   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:03.687135   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:03.687181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:03.722176   67149 cri.go:89] found id: ""
	I1028 18:32:03.722206   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.722217   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:03.722225   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:03.722282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:03.757489   67149 cri.go:89] found id: ""
	I1028 18:32:03.757512   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.757520   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:03.757526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:03.757571   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:03.795359   67149 cri.go:89] found id: ""
	I1028 18:32:03.795400   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.795411   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:03.795429   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:03.795489   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:03.830919   67149 cri.go:89] found id: ""
	I1028 18:32:03.830945   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.830953   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:03.830958   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:03.831008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:03.863396   67149 cri.go:89] found id: ""
	I1028 18:32:03.863425   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.863437   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:03.863445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:03.863516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:03.897085   67149 cri.go:89] found id: ""
	I1028 18:32:03.897112   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.897121   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:03.897128   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:03.897189   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:03.929439   67149 cri.go:89] found id: ""
	I1028 18:32:03.929467   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.929478   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:03.929487   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:03.929503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.982917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:03.982943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:03.996333   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:03.996355   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:04.062786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:04.062813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:04.062827   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:04.143988   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:04.144016   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:03.113499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.612544   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.620294   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.402246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.402730   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.904429   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.373120   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:08.871662   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.683977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:06.696605   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:06.696680   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:06.733031   67149 cri.go:89] found id: ""
	I1028 18:32:06.733060   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.733070   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:06.733078   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:06.733138   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:06.769196   67149 cri.go:89] found id: ""
	I1028 18:32:06.769218   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.769225   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:06.769231   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:06.769280   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:06.806938   67149 cri.go:89] found id: ""
	I1028 18:32:06.806959   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.806966   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:06.806972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:06.807017   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:06.839506   67149 cri.go:89] found id: ""
	I1028 18:32:06.839528   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.839537   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:06.839542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:06.839587   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:06.878275   67149 cri.go:89] found id: ""
	I1028 18:32:06.878300   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.878309   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:06.878317   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:06.878382   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:06.916336   67149 cri.go:89] found id: ""
	I1028 18:32:06.916366   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.916374   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:06.916381   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:06.916434   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:06.971413   67149 cri.go:89] found id: ""
	I1028 18:32:06.971435   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.971443   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:06.971449   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:06.971494   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:07.004432   67149 cri.go:89] found id: ""
	I1028 18:32:07.004464   67149 logs.go:282] 0 containers: []
	W1028 18:32:07.004485   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:07.004496   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:07.004509   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:07.081741   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:07.081780   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:07.122022   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:07.122053   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:07.169470   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:07.169496   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:07.183433   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:07.183459   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:07.251765   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:09.752773   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:09.766042   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:09.766119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:09.802881   67149 cri.go:89] found id: ""
	I1028 18:32:09.802911   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.802923   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:09.802930   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:09.802987   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:09.840269   67149 cri.go:89] found id: ""
	I1028 18:32:09.840292   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.840300   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:09.840305   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:09.840370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:09.874654   67149 cri.go:89] found id: ""
	I1028 18:32:09.874679   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.874689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:09.874696   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:09.874752   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:09.910328   67149 cri.go:89] found id: ""
	I1028 18:32:09.910350   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.910358   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:09.910365   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:09.910425   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:09.942717   67149 cri.go:89] found id: ""
	I1028 18:32:09.942744   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.942752   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:09.942757   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:09.942814   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:09.975644   67149 cri.go:89] found id: ""
	I1028 18:32:09.975674   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.975685   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:09.975692   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:09.975750   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:10.008257   67149 cri.go:89] found id: ""
	I1028 18:32:10.008294   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.008305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:10.008313   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:10.008373   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:10.041678   67149 cri.go:89] found id: ""
	I1028 18:32:10.041705   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.041716   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:10.041726   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:10.041739   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:10.090474   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:10.090503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:10.103846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:10.103874   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:10.172819   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:10.172847   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:10.172862   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:10.251927   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:10.251955   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:10.112553   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.113090   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:10.401890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.902888   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:11.371860   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:13.373112   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.795985   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:12.810859   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:12.810921   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:12.849897   67149 cri.go:89] found id: ""
	I1028 18:32:12.849925   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.849934   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:12.849940   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:12.850003   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:12.883007   67149 cri.go:89] found id: ""
	I1028 18:32:12.883034   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.883045   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:12.883052   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:12.883111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:12.917458   67149 cri.go:89] found id: ""
	I1028 18:32:12.917485   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.917496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:12.917503   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:12.917561   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:12.950531   67149 cri.go:89] found id: ""
	I1028 18:32:12.950558   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.950568   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:12.950576   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:12.950631   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:12.983902   67149 cri.go:89] found id: ""
	I1028 18:32:12.983929   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.983937   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:12.983943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:12.983986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:13.017486   67149 cri.go:89] found id: ""
	I1028 18:32:13.017513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.017521   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:13.017526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:13.017582   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:13.050553   67149 cri.go:89] found id: ""
	I1028 18:32:13.050582   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.050594   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:13.050601   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:13.050658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:13.083489   67149 cri.go:89] found id: ""
	I1028 18:32:13.083513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.083520   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:13.083528   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:13.083537   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:13.137451   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:13.137482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:13.153154   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:13.153179   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:13.221043   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:13.221066   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:13.221080   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:13.299930   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:13.299960   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:15.850484   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:15.862930   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:15.862982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:15.895625   67149 cri.go:89] found id: ""
	I1028 18:32:15.895643   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.895651   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:15.895657   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:15.895701   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:15.928073   67149 cri.go:89] found id: ""
	I1028 18:32:15.928103   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.928113   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:15.928120   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:15.928180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:15.962261   67149 cri.go:89] found id: ""
	I1028 18:32:15.962282   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.962290   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:15.962295   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:15.962342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:15.999177   67149 cri.go:89] found id: ""
	I1028 18:32:15.999206   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.999216   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:15.999224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:15.999282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:16.033098   67149 cri.go:89] found id: ""
	I1028 18:32:16.033126   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.033138   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:16.033145   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:16.033208   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:14.612739   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.112266   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.401576   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.401773   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:18.372059   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:16.067049   67149 cri.go:89] found id: ""
	I1028 18:32:16.067071   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.067083   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:16.067089   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:16.067145   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:16.106936   67149 cri.go:89] found id: ""
	I1028 18:32:16.106970   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.106981   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:16.106988   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:16.107044   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:16.141702   67149 cri.go:89] found id: ""
	I1028 18:32:16.141729   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.141741   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:16.141751   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:16.141762   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:16.178772   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:16.178803   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:16.230851   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:16.230878   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:16.244489   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:16.244514   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:16.319362   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:16.319389   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:16.319405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:18.899694   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:18.913287   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:18.913358   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:18.954136   67149 cri.go:89] found id: ""
	I1028 18:32:18.954158   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.954165   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:18.954170   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:18.954218   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:18.987427   67149 cri.go:89] found id: ""
	I1028 18:32:18.987449   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.987457   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:18.987462   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:18.987505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:19.022067   67149 cri.go:89] found id: ""
	I1028 18:32:19.022099   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.022110   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:19.022118   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:19.022167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:19.054533   67149 cri.go:89] found id: ""
	I1028 18:32:19.054560   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.054570   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:19.054578   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:19.054644   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:19.099324   67149 cri.go:89] found id: ""
	I1028 18:32:19.099356   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.099367   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:19.099375   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:19.099436   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:19.146437   67149 cri.go:89] found id: ""
	I1028 18:32:19.146463   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.146470   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:19.146478   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:19.146540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:19.192027   67149 cri.go:89] found id: ""
	I1028 18:32:19.192053   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.192070   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:19.192078   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:19.192140   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:19.228411   67149 cri.go:89] found id: ""
	I1028 18:32:19.228437   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.228447   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:19.228457   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:19.228480   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:19.313151   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:19.313183   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:19.352117   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:19.352142   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:19.402772   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:19.402805   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:19.416148   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:19.416167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:19.483098   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:19.112720   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.611924   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:19.403635   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.902116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:20.872280   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:22.872726   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.983420   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:21.997129   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:21.997180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:22.035600   67149 cri.go:89] found id: ""
	I1028 18:32:22.035622   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.035631   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:22.035637   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:22.035684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:22.073413   67149 cri.go:89] found id: ""
	I1028 18:32:22.073440   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.073450   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:22.073458   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:22.073505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:22.108637   67149 cri.go:89] found id: ""
	I1028 18:32:22.108663   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.108673   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:22.108682   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:22.108740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:22.145837   67149 cri.go:89] found id: ""
	I1028 18:32:22.145860   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.145867   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:22.145873   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:22.145928   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:22.183830   67149 cri.go:89] found id: ""
	I1028 18:32:22.183855   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.183864   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:22.183869   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:22.183917   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:22.221402   67149 cri.go:89] found id: ""
	I1028 18:32:22.221423   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.221430   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:22.221436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:22.221484   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:22.262193   67149 cri.go:89] found id: ""
	I1028 18:32:22.262220   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.262229   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:22.262234   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:22.262297   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:22.298774   67149 cri.go:89] found id: ""
	I1028 18:32:22.298797   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.298808   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:22.298819   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:22.298831   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:22.348677   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:22.348716   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:22.362199   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:22.362220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:22.429304   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:22.429327   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:22.429345   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:22.511591   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:22.511623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.049119   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:25.063910   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:25.063970   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:25.099795   67149 cri.go:89] found id: ""
	I1028 18:32:25.099822   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.099833   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:25.099840   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:25.099898   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:25.137957   67149 cri.go:89] found id: ""
	I1028 18:32:25.137985   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.137995   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:25.138002   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:25.138063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:25.174687   67149 cri.go:89] found id: ""
	I1028 18:32:25.174715   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.174726   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:25.174733   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:25.174795   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:25.207039   67149 cri.go:89] found id: ""
	I1028 18:32:25.207067   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.207077   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:25.207084   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:25.207130   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:25.239961   67149 cri.go:89] found id: ""
	I1028 18:32:25.239990   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.239998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:25.240004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:25.240055   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:25.273823   67149 cri.go:89] found id: ""
	I1028 18:32:25.273848   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.273858   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:25.273865   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:25.273925   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:25.310725   67149 cri.go:89] found id: ""
	I1028 18:32:25.310754   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.310765   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:25.310772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:25.310830   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:25.348724   67149 cri.go:89] found id: ""
	I1028 18:32:25.348749   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.348760   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:25.348770   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:25.348784   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:25.430213   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:25.430243   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.472233   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:25.472263   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:25.525648   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:25.525676   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:25.538697   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:25.538721   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:25.606779   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:23.612901   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.112494   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:23.902733   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.402271   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:25.372428   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:27.870461   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:29.871824   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.107877   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:28.122241   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:28.122296   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:28.157042   67149 cri.go:89] found id: ""
	I1028 18:32:28.157070   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.157082   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:28.157089   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:28.157142   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:28.190625   67149 cri.go:89] found id: ""
	I1028 18:32:28.190648   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.190658   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:28.190666   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:28.190724   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:28.224528   67149 cri.go:89] found id: ""
	I1028 18:32:28.224551   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.224559   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:28.224565   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:28.224609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:28.265073   67149 cri.go:89] found id: ""
	I1028 18:32:28.265100   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.265110   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:28.265116   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:28.265174   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:28.302598   67149 cri.go:89] found id: ""
	I1028 18:32:28.302623   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.302633   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:28.302640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:28.302697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:28.339757   67149 cri.go:89] found id: ""
	I1028 18:32:28.339781   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.339789   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:28.339794   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:28.339846   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:28.375185   67149 cri.go:89] found id: ""
	I1028 18:32:28.375213   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.375224   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:28.375231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:28.375294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:28.413292   67149 cri.go:89] found id: ""
	I1028 18:32:28.413316   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.413334   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:28.413344   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:28.413376   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:28.464069   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:28.464098   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:28.478275   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:28.478299   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:28.546483   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.546504   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:28.546515   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:28.623015   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:28.623041   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:28.613303   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.111518   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.403789   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:30.903113   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:32.371951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:34.372820   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.161570   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:31.175056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:31.175119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:31.210163   67149 cri.go:89] found id: ""
	I1028 18:32:31.210187   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.210199   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:31.210207   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:31.210264   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:31.244605   67149 cri.go:89] found id: ""
	I1028 18:32:31.244630   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.244637   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:31.244643   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:31.244688   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:31.280793   67149 cri.go:89] found id: ""
	I1028 18:32:31.280818   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.280827   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:31.280833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:31.280890   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:31.314616   67149 cri.go:89] found id: ""
	I1028 18:32:31.314641   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.314649   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:31.314654   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:31.314709   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:31.349386   67149 cri.go:89] found id: ""
	I1028 18:32:31.349410   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.349417   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:31.349423   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:31.349469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:31.382831   67149 cri.go:89] found id: ""
	I1028 18:32:31.382861   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.382871   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:31.382879   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:31.382924   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:31.417365   67149 cri.go:89] found id: ""
	I1028 18:32:31.417391   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.417400   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:31.417410   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:31.417469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:31.450631   67149 cri.go:89] found id: ""
	I1028 18:32:31.450660   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.450672   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:31.450683   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:31.450697   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.488932   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:31.488959   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:31.539335   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:31.539361   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:31.552304   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:31.552328   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:31.629291   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:31.629308   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:31.629323   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.207517   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:34.221231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:34.221310   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:34.255342   67149 cri.go:89] found id: ""
	I1028 18:32:34.255365   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.255373   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:34.255379   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:34.255438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:34.303802   67149 cri.go:89] found id: ""
	I1028 18:32:34.303827   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.303836   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:34.303843   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:34.303896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:34.339531   67149 cri.go:89] found id: ""
	I1028 18:32:34.339568   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.339579   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:34.339589   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:34.339653   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:34.374063   67149 cri.go:89] found id: ""
	I1028 18:32:34.374084   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.374094   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:34.374102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:34.374155   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:34.410880   67149 cri.go:89] found id: ""
	I1028 18:32:34.410909   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.410918   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:34.410924   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:34.410971   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:34.445372   67149 cri.go:89] found id: ""
	I1028 18:32:34.445397   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.445408   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:34.445416   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:34.445474   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:34.477820   67149 cri.go:89] found id: ""
	I1028 18:32:34.477844   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.477851   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:34.477857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:34.477909   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:34.517581   67149 cri.go:89] found id: ""
	I1028 18:32:34.517602   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.517609   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:34.517618   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:34.517632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:34.530407   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:34.530430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:34.599055   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:34.599083   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:34.599096   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.681579   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:34.681612   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:34.720523   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:34.720550   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:33.111858   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.112216   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.613521   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:33.401782   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.402544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.901848   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:36.871451   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.372642   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.272697   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:37.289091   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:37.289159   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:37.321600   67149 cri.go:89] found id: ""
	I1028 18:32:37.321628   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.321639   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:37.321647   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:37.321704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:37.353296   67149 cri.go:89] found id: ""
	I1028 18:32:37.353324   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.353337   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:37.353343   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:37.353400   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:37.386299   67149 cri.go:89] found id: ""
	I1028 18:32:37.386321   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.386328   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:37.386333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:37.386401   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:37.420992   67149 cri.go:89] found id: ""
	I1028 18:32:37.421026   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.421039   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:37.421047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:37.421117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:37.456174   67149 cri.go:89] found id: ""
	I1028 18:32:37.456206   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.456217   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:37.456224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:37.456284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:37.491796   67149 cri.go:89] found id: ""
	I1028 18:32:37.491819   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.491827   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:37.491833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:37.491878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:37.529002   67149 cri.go:89] found id: ""
	I1028 18:32:37.529028   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.529039   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:37.529047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:37.529111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:37.568967   67149 cri.go:89] found id: ""
	I1028 18:32:37.568993   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.569001   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:37.569010   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:37.569022   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:37.640041   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:37.640065   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:37.640076   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:37.725490   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:37.725524   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:37.771858   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:37.771879   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.821240   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:37.821271   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.334946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:40.349147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:40.349216   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:40.383931   67149 cri.go:89] found id: ""
	I1028 18:32:40.383956   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.383966   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:40.383973   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:40.384028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:40.419877   67149 cri.go:89] found id: ""
	I1028 18:32:40.419905   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.419915   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:40.419922   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:40.419978   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:40.453659   67149 cri.go:89] found id: ""
	I1028 18:32:40.453681   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.453689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:40.453695   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:40.453744   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:40.486299   67149 cri.go:89] found id: ""
	I1028 18:32:40.486326   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.486343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:40.486350   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:40.486407   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:40.518309   67149 cri.go:89] found id: ""
	I1028 18:32:40.518334   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.518344   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:40.518351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:40.518402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:40.549008   67149 cri.go:89] found id: ""
	I1028 18:32:40.549040   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.549049   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:40.549055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:40.549108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:40.586157   67149 cri.go:89] found id: ""
	I1028 18:32:40.586177   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.586184   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:40.586189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:40.586232   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:40.621107   67149 cri.go:89] found id: ""
	I1028 18:32:40.621133   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.621144   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:40.621153   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:40.621164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.633793   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:40.633816   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:40.700370   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:40.700393   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:40.700405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:40.780964   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:40.780993   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:40.819904   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:40.819928   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:40.112755   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:42.113116   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.903476   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.904639   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.872360   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.371399   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:43.371487   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:43.384387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:43.384445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:43.419889   67149 cri.go:89] found id: ""
	I1028 18:32:43.419922   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.419931   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:43.419937   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:43.419997   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:43.455177   67149 cri.go:89] found id: ""
	I1028 18:32:43.455209   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.455219   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:43.455227   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:43.455295   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:43.493070   67149 cri.go:89] found id: ""
	I1028 18:32:43.493094   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.493104   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:43.493111   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:43.493170   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:43.526164   67149 cri.go:89] found id: ""
	I1028 18:32:43.526191   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.526199   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:43.526205   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:43.526254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:43.559225   67149 cri.go:89] found id: ""
	I1028 18:32:43.559252   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.559263   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:43.559270   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:43.559323   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:43.597178   67149 cri.go:89] found id: ""
	I1028 18:32:43.597198   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.597206   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:43.597212   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:43.597276   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:43.633179   67149 cri.go:89] found id: ""
	I1028 18:32:43.633200   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.633209   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:43.633214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:43.633290   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:43.669567   67149 cri.go:89] found id: ""
	I1028 18:32:43.669596   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.669605   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:43.669615   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:43.669631   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:43.737618   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:43.737638   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:43.737650   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:43.821394   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:43.821425   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:43.859924   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:43.859950   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:43.913539   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:43.913566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:44.611539   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.613781   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.401399   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.401930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.371445   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.372075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.429021   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:46.443137   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:46.443197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:46.480363   67149 cri.go:89] found id: ""
	I1028 18:32:46.480385   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.480394   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:46.480400   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:46.480452   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:46.514702   67149 cri.go:89] found id: ""
	I1028 18:32:46.514731   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.514738   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:46.514744   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:46.514796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:46.546829   67149 cri.go:89] found id: ""
	I1028 18:32:46.546857   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.546868   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:46.546874   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:46.546920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:46.580372   67149 cri.go:89] found id: ""
	I1028 18:32:46.580398   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.580407   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:46.580415   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:46.580491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:46.615455   67149 cri.go:89] found id: ""
	I1028 18:32:46.615479   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.615489   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:46.615497   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:46.615556   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:46.649547   67149 cri.go:89] found id: ""
	I1028 18:32:46.649570   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.649577   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:46.649583   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:46.649641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:46.684744   67149 cri.go:89] found id: ""
	I1028 18:32:46.684768   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.684779   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:46.684787   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:46.684852   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:46.725530   67149 cri.go:89] found id: ""
	I1028 18:32:46.725558   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.725569   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:46.725578   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:46.725592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:46.794487   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:46.794506   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:46.794517   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:46.881407   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:46.881438   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:46.921649   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:46.921671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:46.972915   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:46.972947   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.486835   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:49.501445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:49.501509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:49.537356   67149 cri.go:89] found id: ""
	I1028 18:32:49.537377   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.537384   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:49.537389   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:49.537443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:49.568514   67149 cri.go:89] found id: ""
	I1028 18:32:49.568541   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.568549   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:49.568555   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:49.568610   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:49.602300   67149 cri.go:89] found id: ""
	I1028 18:32:49.602324   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.602333   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:49.602342   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:49.602390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:49.640326   67149 cri.go:89] found id: ""
	I1028 18:32:49.640356   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.640366   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:49.640376   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:49.640437   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:49.675145   67149 cri.go:89] found id: ""
	I1028 18:32:49.675175   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.675183   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:49.675189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:49.675235   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:49.711104   67149 cri.go:89] found id: ""
	I1028 18:32:49.711129   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.711139   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:49.711147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:49.711206   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:49.748316   67149 cri.go:89] found id: ""
	I1028 18:32:49.748366   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.748378   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:49.748385   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:49.748441   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:49.781620   67149 cri.go:89] found id: ""
	I1028 18:32:49.781646   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.781656   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:49.781665   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:49.781679   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.795119   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:49.795143   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:49.870438   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:49.870519   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:49.870539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:49.956845   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:49.956875   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:49.993067   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:49.993097   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:49.112102   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:51.612691   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.901950   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.902354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.903627   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.871412   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.871499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:54.874588   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.543260   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:52.556524   67149 kubeadm.go:597] duration metric: took 4m2.404527005s to restartPrimaryControlPlane
	W1028 18:32:52.556602   67149 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:52.556639   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:32:53.011065   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:32:53.026226   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:32:53.035868   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:32:53.045257   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:32:53.045271   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:32:53.045302   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:32:53.054383   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:32:53.054430   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:32:53.063665   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:32:53.073006   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:32:53.073054   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:32:53.083156   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.092700   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:32:53.092742   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.102374   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:32:53.112072   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:32:53.112121   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:32:53.122102   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:32:53.347625   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:32:53.613118   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:56.111841   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:55.402354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.902406   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.371909   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:59.872630   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.112962   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:00.613499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.896006   66801 pod_ready.go:82] duration metric: took 4m0.00005957s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	E1028 18:32:58.896033   66801 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:32:58.896052   66801 pod_ready.go:39] duration metric: took 4m13.055181811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:32:58.896092   66801 kubeadm.go:597] duration metric: took 4m21.540757653s to restartPrimaryControlPlane
	W1028 18:32:58.896147   66801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:58.896173   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:02.372443   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:04.871981   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:03.113038   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:05.114488   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:07.612365   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:06.872705   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.371018   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.612856   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:12.114228   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:11.371831   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:13.372636   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:14.613213   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.113328   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:15.871907   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.872203   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:19.612892   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:21.613052   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:20.370964   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:22.371880   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:24.372718   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:25.039296   66801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.14309835s)
	I1028 18:33:25.039378   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:25.056172   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:25.066775   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:25.077717   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:25.077734   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:25.077770   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:33:25.086924   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:25.086968   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:25.096867   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:33:25.106162   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:25.106205   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:25.117015   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.126191   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:25.126245   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.135691   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:33:25.144827   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:25.144867   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:25.153834   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:25.201789   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:33:25.201866   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:33:25.306568   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:33:25.306717   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:33:25.306845   66801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:33:25.314339   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:33:25.316173   66801 out.go:235]   - Generating certificates and keys ...
	I1028 18:33:25.316271   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:33:25.316345   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:33:25.316463   66801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:33:25.316571   66801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:33:25.316688   66801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:33:25.316768   66801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:33:25.316857   66801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:33:25.316943   66801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:33:25.317047   66801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:33:25.317149   66801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:33:25.317209   66801 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:33:25.317299   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:33:25.643056   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:33:25.723345   66801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:33:25.831628   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:33:25.908255   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:33:26.215149   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:33:26.215654   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:33:26.218291   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:33:24.111834   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.113295   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.220065   66801 out.go:235]   - Booting up control plane ...
	I1028 18:33:26.220170   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:33:26.220251   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:33:26.220336   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:33:26.239633   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:33:26.245543   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:33:26.245612   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:33:26.378154   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:33:26.378332   66801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:33:26.879957   66801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.937575ms
	I1028 18:33:26.880090   66801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:33:26.365771   67489 pod_ready.go:82] duration metric: took 4m0.000286415s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:26.365796   67489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:26.365812   67489 pod_ready.go:39] duration metric: took 4m12.539631154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:26.365837   67489 kubeadm.go:597] duration metric: took 4m19.835720994s to restartPrimaryControlPlane
	W1028 18:33:26.365884   67489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:26.365910   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:31.882091   66801 kubeadm.go:310] [api-check] The API server is healthy after 5.002114527s
	I1028 18:33:31.897915   66801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:33:31.914311   66801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:33:31.943604   66801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:33:31.943859   66801 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-051152 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:33:31.954350   66801 kubeadm.go:310] [bootstrap-token] Using token: h7eyzq.87sgylc03ke6zhfy
	I1028 18:33:28.613480   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.113034   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.955444   66801 out.go:235]   - Configuring RBAC rules ...
	I1028 18:33:31.955591   66801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:33:31.960749   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:33:31.967695   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:33:31.970863   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:33:31.973924   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:33:31.979191   66801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:33:32.291512   66801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:33:32.714999   66801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:33:33.291889   66801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:33:33.293069   66801 kubeadm.go:310] 
	I1028 18:33:33.293167   66801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:33:33.293182   66801 kubeadm.go:310] 
	I1028 18:33:33.293255   66801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:33:33.293268   66801 kubeadm.go:310] 
	I1028 18:33:33.293307   66801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:33:33.293372   66801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:33:33.293435   66801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:33:33.293447   66801 kubeadm.go:310] 
	I1028 18:33:33.293518   66801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:33:33.293526   66801 kubeadm.go:310] 
	I1028 18:33:33.293595   66801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:33:33.293624   66801 kubeadm.go:310] 
	I1028 18:33:33.293712   66801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:33:33.293842   66801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:33:33.293946   66801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:33:33.293960   66801 kubeadm.go:310] 
	I1028 18:33:33.294117   66801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:33:33.294196   66801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:33:33.294203   66801 kubeadm.go:310] 
	I1028 18:33:33.294276   66801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294385   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:33:33.294414   66801 kubeadm.go:310] 	--control-plane 
	I1028 18:33:33.294427   66801 kubeadm.go:310] 
	I1028 18:33:33.294515   66801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:33:33.294525   66801 kubeadm.go:310] 
	I1028 18:33:33.294629   66801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294774   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:33:33.295715   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:33:33.295839   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:33:33.295852   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:33:33.297447   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:33:33.298607   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:33:33.311113   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:33:33.329576   66801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:33:33.329634   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:33.329680   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-051152 minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=no-preload-051152 minikube.k8s.io/primary=true
	I1028 18:33:33.355186   66801 ops.go:34] apiserver oom_adj: -16
	I1028 18:33:33.509281   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.009672   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.509515   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.010084   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.509359   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.009689   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.509671   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.009884   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.510004   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.615853   66801 kubeadm.go:1113] duration metric: took 4.286272328s to wait for elevateKubeSystemPrivileges
	I1028 18:33:37.615890   66801 kubeadm.go:394] duration metric: took 5m0.313982235s to StartCluster
	I1028 18:33:37.615913   66801 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.616000   66801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:33:37.618418   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.618741   66801 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:33:37.618857   66801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:33:37.618951   66801 addons.go:69] Setting storage-provisioner=true in profile "no-preload-051152"
	I1028 18:33:37.618963   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:33:37.618975   66801 addons.go:69] Setting default-storageclass=true in profile "no-preload-051152"
	I1028 18:33:37.619001   66801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-051152"
	I1028 18:33:37.618973   66801 addons.go:234] Setting addon storage-provisioner=true in "no-preload-051152"
	W1028 18:33:37.619019   66801 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:33:37.619012   66801 addons.go:69] Setting metrics-server=true in profile "no-preload-051152"
	I1028 18:33:37.619043   66801 addons.go:234] Setting addon metrics-server=true in "no-preload-051152"
	I1028 18:33:37.619047   66801 host.go:66] Checking if "no-preload-051152" exists ...
	W1028 18:33:37.619056   66801 addons.go:243] addon metrics-server should already be in state true
	I1028 18:33:37.619097   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.619417   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619446   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619472   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619488   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619487   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619521   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.620738   66801 out.go:177] * Verifying Kubernetes components...
	I1028 18:33:37.622165   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:33:37.636006   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1028 18:33:37.636285   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I1028 18:33:37.636536   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.636621   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.637055   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637082   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637344   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637368   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637419   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637634   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637811   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.638112   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.638157   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.638738   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1028 18:33:37.639176   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.639609   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.639632   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.639918   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.640333   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.640375   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.641571   66801 addons.go:234] Setting addon default-storageclass=true in "no-preload-051152"
	W1028 18:33:37.641592   66801 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:33:37.641620   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.641947   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.641981   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.657758   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I1028 18:33:37.657834   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I1028 18:33:37.657942   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I1028 18:33:37.658187   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658335   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658739   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658752   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658877   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658896   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658931   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.659309   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659358   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659409   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.659428   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.659552   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.659934   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.659964   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.660163   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.660406   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.661568   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.662429   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.663435   66801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:33:37.664414   66801 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:33:33.613699   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:36.111831   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:37.665306   66801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.665324   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:33:37.665343   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.666055   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:33:37.666073   66801 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:33:37.666092   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.668918   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669385   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669519   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.669543   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669754   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.669942   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.670093   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.670266   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.670513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.670556   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.670719   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.670851   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.671014   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.671115   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.677419   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I1028 18:33:37.677828   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.678184   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.678201   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.678476   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.678686   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.680177   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.680403   66801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.680420   66801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:33:37.680437   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.683981   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.684534   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.685007   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.685153   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.685307   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.832104   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:33:37.859406   66801 node_ready.go:35] waiting up to 6m0s for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873437   66801 node_ready.go:49] node "no-preload-051152" has status "Ready":"True"
	I1028 18:33:37.873460   66801 node_ready.go:38] duration metric: took 14.023686ms for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873470   66801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:37.888286   66801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:37.917341   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:33:37.917363   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:33:37.948690   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:33:37.948716   66801 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:33:37.967948   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.971737   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.998758   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:37.998782   66801 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:33:38.034907   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:38.924695   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924720   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.924762   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924828   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925048   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925079   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925093   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925105   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925128   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925131   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925142   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925153   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925154   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925164   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925372   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925397   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925382   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926852   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926857   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.926872   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.955462   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.955492   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.955858   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.955938   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.955953   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373144   66801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.338192413s)
	I1028 18:33:39.373209   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373224   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373512   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373529   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373537   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373544   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373761   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373775   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373785   66801 addons.go:475] Verifying addon metrics-server=true in "no-preload-051152"
	I1028 18:33:39.375584   66801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:33:38.113078   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:40.612141   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.612763   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:39.377031   66801 addons.go:510] duration metric: took 1.758176418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:33:39.906691   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.396083   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:44.894264   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:46.396937   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.397023   66801 pod_ready.go:82] duration metric: took 8.508709164s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.397048   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402560   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.402579   66801 pod_ready.go:82] duration metric: took 5.5155ms for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402588   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406630   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.406646   66801 pod_ready.go:82] duration metric: took 4.052513ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406654   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411238   66801 pod_ready.go:93] pod "kube-proxy-28qht" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.411253   66801 pod_ready.go:82] duration metric: took 4.592983ms for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411260   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414867   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.414880   66801 pod_ready.go:82] duration metric: took 3.615132ms for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414886   66801 pod_ready.go:39] duration metric: took 8.541406133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:46.414900   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:33:46.414943   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:33:46.430889   66801 api_server.go:72] duration metric: took 8.81211088s to wait for apiserver process to appear ...
	I1028 18:33:46.430907   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:33:46.430925   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:33:46.435248   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:33:46.435963   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:33:46.435978   66801 api_server.go:131] duration metric: took 5.065719ms to wait for apiserver health ...
	I1028 18:33:46.435984   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:33:46.596186   66801 system_pods.go:59] 9 kube-system pods found
	I1028 18:33:46.596222   66801 system_pods.go:61] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.596230   66801 system_pods.go:61] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.596234   66801 system_pods.go:61] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.596238   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.596242   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.596246   66801 system_pods.go:61] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.596252   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.596301   66801 system_pods.go:61] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.596317   66801 system_pods.go:61] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.596324   66801 system_pods.go:74] duration metric: took 160.335823ms to wait for pod list to return data ...
	I1028 18:33:46.596341   66801 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:33:46.793115   66801 default_sa.go:45] found service account: "default"
	I1028 18:33:46.793147   66801 default_sa.go:55] duration metric: took 196.795286ms for default service account to be created ...
	I1028 18:33:46.793157   66801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:33:46.995868   66801 system_pods.go:86] 9 kube-system pods found
	I1028 18:33:46.995899   66801 system_pods.go:89] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.995905   66801 system_pods.go:89] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.995909   66801 system_pods.go:89] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.995912   66801 system_pods.go:89] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.995917   66801 system_pods.go:89] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.995920   66801 system_pods.go:89] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.995924   66801 system_pods.go:89] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.995929   66801 system_pods.go:89] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.995934   66801 system_pods.go:89] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.995941   66801 system_pods.go:126] duration metric: took 202.778451ms to wait for k8s-apps to be running ...
	I1028 18:33:46.995946   66801 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:33:46.995990   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:47.011260   66801 system_svc.go:56] duration metric: took 15.302599ms WaitForService to wait for kubelet
	I1028 18:33:47.011285   66801 kubeadm.go:582] duration metric: took 9.392510785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:33:47.011303   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:33:47.193217   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:33:47.193239   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:33:47.193250   66801 node_conditions.go:105] duration metric: took 181.942948ms to run NodePressure ...
	I1028 18:33:47.193261   66801 start.go:241] waiting for startup goroutines ...
	I1028 18:33:47.193267   66801 start.go:246] waiting for cluster config update ...
	I1028 18:33:47.193278   66801 start.go:255] writing updated cluster config ...
	I1028 18:33:47.193529   66801 ssh_runner.go:195] Run: rm -f paused
	I1028 18:33:47.240247   66801 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:33:47.242139   66801 out.go:177] * Done! kubectl is now configured to use "no-preload-051152" cluster and "default" namespace by default
	I1028 18:33:45.112037   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:47.112764   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:48.107354   66600 pod_ready.go:82] duration metric: took 4m0.001062902s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:48.107377   66600 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:48.107395   66600 pod_ready.go:39] duration metric: took 4m13.535788316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:48.107420   66600 kubeadm.go:597] duration metric: took 4m22.316644235s to restartPrimaryControlPlane
	W1028 18:33:48.107467   66600 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:48.107490   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:52.667497   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.301566887s)
	I1028 18:33:52.667559   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:52.683580   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:52.695334   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:52.705505   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:52.705524   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:52.705569   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:33:52.714922   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:52.714969   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:52.724156   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:33:52.733125   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:52.733161   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:52.742369   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.751021   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:52.751065   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.760543   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:33:52.770939   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:52.770985   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:52.781890   67489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:52.961562   67489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:01.798408   67489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:01.798470   67489 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:01.798580   67489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:01.798724   67489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:01.798811   67489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:01.798882   67489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:01.800228   67489 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:01.800320   67489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:01.800392   67489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:01.800486   67489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:01.800580   67489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:01.800641   67489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:01.800694   67489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:01.800764   67489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:01.800842   67489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:01.800955   67489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:01.801019   67489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:01.801053   67489 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:01.801102   67489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:01.801145   67489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:01.801196   67489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:01.801252   67489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:01.801316   67489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:01.801409   67489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:01.801513   67489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:01.801605   67489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:01.802967   67489 out.go:235]   - Booting up control plane ...
	I1028 18:34:01.803061   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:01.803169   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:01.803254   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:01.803376   67489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:01.803488   67489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:01.803558   67489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:01.803685   67489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:01.803800   67489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:01.803869   67489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.148945ms
	I1028 18:34:01.803933   67489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:01.803986   67489 kubeadm.go:310] [api-check] The API server is healthy after 5.003798359s
	I1028 18:34:01.804081   67489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:01.804187   67489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:01.804240   67489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:01.804438   67489 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-692033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:01.804533   67489 kubeadm.go:310] [bootstrap-token] Using token: wy8zqj.38m6tcr6hp7sgzod
	I1028 18:34:01.805760   67489 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:01.805856   67489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:01.805949   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:01.806108   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:01.806233   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:01.806378   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:01.806464   67489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:01.806579   67489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:01.806633   67489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:01.806673   67489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:01.806679   67489 kubeadm.go:310] 
	I1028 18:34:01.806735   67489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:01.806746   67489 kubeadm.go:310] 
	I1028 18:34:01.806836   67489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:01.806844   67489 kubeadm.go:310] 
	I1028 18:34:01.806880   67489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:01.806957   67489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:01.807001   67489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:01.807007   67489 kubeadm.go:310] 
	I1028 18:34:01.807060   67489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:01.807071   67489 kubeadm.go:310] 
	I1028 18:34:01.807112   67489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:01.807118   67489 kubeadm.go:310] 
	I1028 18:34:01.807171   67489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:01.807246   67489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:01.807307   67489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:01.807313   67489 kubeadm.go:310] 
	I1028 18:34:01.807387   67489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:01.807454   67489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:01.807465   67489 kubeadm.go:310] 
	I1028 18:34:01.807538   67489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807634   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:01.807655   67489 kubeadm.go:310] 	--control-plane 
	I1028 18:34:01.807661   67489 kubeadm.go:310] 
	I1028 18:34:01.807730   67489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:01.807739   67489 kubeadm.go:310] 
	I1028 18:34:01.807810   67489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807913   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:01.807923   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:34:01.807929   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:01.809168   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:01.810293   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:01.822030   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
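After kubeadm init succeeds, the log shows the bridge CNI being configured by creating /etc/cni/net.d and copying a 496-byte 1-k8s.conflist into it. The exact conflist minikube writes is not shown in the log, so the JSON embedded in the sketch below is only a representative bridge-plus-portmap configuration of the same general shape; the file path matches the log, the contents are an assumption.

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI configuration. The real 1-k8s.conflist that
// minikube copies over is not reproduced in the log, so this content is an
// illustrative assumption, not the actual file.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Same steps as the log: mkdir -p /etc/cni/net.d, then write the conflist.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	path := "/etc/cni/net.d/1-k8s.conflist"
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes to %s\n", len(bridgeConflist), path)
}
```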
	I1028 18:34:01.842831   67489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:01.842908   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:01.842963   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-692033 minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=default-k8s-diff-port-692033 minikube.k8s.io/primary=true
	I1028 18:34:01.875265   67489 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:02.050422   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:02.550824   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.050477   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.551245   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.051177   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.550572   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.051071   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.550926   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.638447   67489 kubeadm.go:1113] duration metric: took 3.795598924s to wait for elevateKubeSystemPrivileges
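The repeated `kubectl get sa default` calls above (roughly every 500ms until the 3.8s "elevateKubeSystemPrivileges" metric is logged) are a wait for the default service account to exist before the cluster-admin binding takes effect. A minimal polling sketch of that loop, assuming the same binary and kubeconfig paths as the log; this is an illustration, not minikube's code.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` on a fixed
// interval until it succeeds or the deadline passes, mirroring the repeated
// calls in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	start := time.Now()
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute, // timeout chosen for the sketch, not taken from the log
	)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
}
```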
	I1028 18:34:05.638483   67489 kubeadm.go:394] duration metric: took 4m59.162037455s to StartCluster
	I1028 18:34:05.638504   67489 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.638591   67489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:05.641196   67489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.641497   67489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:05.641626   67489 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:05.641720   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:05.641730   67489 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641748   67489 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641760   67489 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:05.641776   67489 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641781   67489 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641792   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.641794   67489 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641803   67489 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:05.641804   67489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-692033"
	I1028 18:34:05.641832   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.642210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642217   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642229   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642245   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642255   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642314   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642905   67489 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:05.644361   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:05.658478   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I1028 18:34:05.658586   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1028 18:34:05.659040   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659044   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659524   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659546   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659701   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659724   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659879   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660044   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660111   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.660610   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.660648   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.661748   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1028 18:34:05.662150   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.662607   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.662627   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.662983   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.662991   67489 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.663006   67489 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:05.663029   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.663294   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663334   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.663531   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663572   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.675955   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1028 18:34:05.676345   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.676784   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.676802   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.677154   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.677358   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.678723   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I1028 18:34:05.678897   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1028 18:34:05.679025   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.679243   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679337   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679700   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679715   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.679805   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679823   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.680500   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680506   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680706   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.680834   67489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:05.681042   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.681070   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.681982   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:05.682005   67489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:05.682035   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.682363   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.683806   67489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:05.684992   67489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.685011   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:05.685029   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.686903   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.686957   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.686973   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.687218   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.687429   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.687693   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.687850   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.688516   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.688908   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.688933   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.689193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.689372   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.689513   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.689655   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.696743   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I1028 18:34:05.697029   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.697432   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.697458   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.697697   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.697843   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.699192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.699397   67489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.699405   67489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:05.699416   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.702897   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.703368   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703483   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.703667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.703841   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.703996   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.838049   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:05.857829   67489 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866141   67489 node_ready.go:49] node "default-k8s-diff-port-692033" has status "Ready":"True"
	I1028 18:34:05.866158   67489 node_ready.go:38] duration metric: took 8.296617ms for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866167   67489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:05.873027   67489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:05.927585   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:05.927608   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:05.928743   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.946390   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.961712   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:05.961734   67489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:05.993688   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:05.993711   67489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:06.097871   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:06.696189   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696226   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696195   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696300   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696696   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696713   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696697   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696721   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696735   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696742   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696750   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696722   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696794   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696984   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697000   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.697027   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697036   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.720324   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.720346   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.720649   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.720668   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262166   67489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.164245646s)
	I1028 18:34:07.262256   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262277   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262587   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262608   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262607   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262616   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262625   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262890   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262923   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262936   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262948   67489 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-692033"
	I1028 18:34:07.264414   67489 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:07.265449   67489 addons.go:510] duration metric: took 1.623834435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
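The addon-enable phase above copies the metrics-server manifests to /etc/kubernetes/addons and applies them in a single kubectl invocation (the 1.16s `kubectl apply -f ... -f ...` completion logged just before). A sketch of building and running that one command follows; the file names and paths are taken from the log, but the helper itself is illustrative and assumes the manifests already exist on the node.

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests applies every metrics-server manifest in one kubectl
// call, the same shape as the command in the log:
// sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
func applyAddonManifests() error {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	if err := applyAddonManifests(); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```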
	I1028 18:34:07.882264   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.313629   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.206119005s)
	I1028 18:34:14.313702   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:14.329212   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:34:14.339407   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:14.349645   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:14.349669   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:14.349716   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:14.359332   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:14.359384   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:14.369627   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:14.381040   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:14.381098   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:14.390359   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.399743   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:14.399783   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.408932   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:14.417840   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:14.417876   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:14.427234   66600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:14.472502   66600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:14.472593   66600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:14.578311   66600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:14.578456   66600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:14.578576   66600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:14.586748   66600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:10.380304   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:12.878632   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.878951   67489 pod_ready.go:93] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:14.878974   67489 pod_ready.go:82] duration metric: took 9.005915421s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:14.878983   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385215   67489 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.385239   67489 pod_ready.go:82] duration metric: took 506.249352ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385250   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390412   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.390435   67489 pod_ready.go:82] duration metric: took 5.177559ms for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390448   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395252   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.395272   67489 pod_ready.go:82] duration metric: took 4.816812ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395281   67489 pod_ready.go:39] duration metric: took 9.52910413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
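The pod_ready lines above poll each system-critical pod until its Ready condition reports True. A small client-go sketch of that check is below; the kubeconfig path and pod name are taken from the log for illustration, and the loop is a simplification of the helper the test uses.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True, the same
// status the pod_ready helper prints as "Ready":"True".
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	name, ns := "etcd-default-k8s-diff-port-692033", "kube-system"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Printf("pod %q has status \"Ready\":\"True\"\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}
```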
	I1028 18:34:15.395298   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:15.395349   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:15.413693   67489 api_server.go:72] duration metric: took 9.772160727s to wait for apiserver process to appear ...
	I1028 18:34:15.413715   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:15.413734   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:34:15.417780   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:34:15.418688   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:15.418712   67489 api_server.go:131] duration metric: took 4.989226ms to wait for apiserver health ...
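The apiserver health wait above boils down to a GET against https://192.168.39.215:8444/healthz that must return HTTP 200 with body "ok". A minimal sketch of that probe follows; skipping TLS verification is a simplification for the example (the real check trusts the cluster CA), and the URL is the one from the log.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of request as the log's healthz check and
// prints the status code plus body ("ok" when the apiserver is healthy).
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("apiserver not healthy yet")
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.215:8444/healthz"); err != nil {
		fmt.Println(err)
	}
}
```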
	I1028 18:34:15.418720   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:15.424285   67489 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:15.424306   67489 system_pods.go:61] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.424310   67489 system_pods.go:61] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.424315   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.424318   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.424323   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.424327   67489 system_pods.go:61] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.424331   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.424337   67489 system_pods.go:61] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.424344   67489 system_pods.go:61] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.424351   67489 system_pods.go:74] duration metric: took 5.625205ms to wait for pod list to return data ...
	I1028 18:34:15.424359   67489 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:15.427132   67489 default_sa.go:45] found service account: "default"
	I1028 18:34:15.427153   67489 default_sa.go:55] duration metric: took 2.788005ms for default service account to be created ...
	I1028 18:34:15.427161   67489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:15.479404   67489 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:15.479427   67489 system_pods.go:89] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.479433   67489 system_pods.go:89] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.479436   67489 system_pods.go:89] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.479443   67489 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.479448   67489 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.479453   67489 system_pods.go:89] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.479460   67489 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.479472   67489 system_pods.go:89] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.479477   67489 system_pods.go:89] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.479491   67489 system_pods.go:126] duration metric: took 52.324012ms to wait for k8s-apps to be running ...
	I1028 18:34:15.479502   67489 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:15.479548   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:15.493743   67489 system_svc.go:56] duration metric: took 14.233947ms WaitForService to wait for kubelet
	I1028 18:34:15.493772   67489 kubeadm.go:582] duration metric: took 9.852243286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:15.493796   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:15.677127   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:15.677149   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:15.677156   67489 node_conditions.go:105] duration metric: took 183.355591ms to run NodePressure ...
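The NodePressure step above reads node capacity (ephemeral storage "17734596Ki", cpu "2") and verifies that no pressure conditions are set. A client-go sketch of reading those same fields is below; the kubeconfig path is illustrative and the loop is a simplified stand-in for the test helper.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The "ephemeral capacity" and "cpu capacity" values in the log come
		// from the node's status.capacity resource list.
		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral().String())
		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu().String())
		// NodePressure verification: none of these conditions should be True.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("node %s reports %s\n", n.Name, c.Type)
				}
			}
		}
	}
}
```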
	I1028 18:34:15.677167   67489 start.go:241] waiting for startup goroutines ...
	I1028 18:34:15.677174   67489 start.go:246] waiting for cluster config update ...
	I1028 18:34:15.677183   67489 start.go:255] writing updated cluster config ...
	I1028 18:34:15.677419   67489 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:15.731157   67489 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:15.732912   67489 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-692033" cluster and "default" namespace by default
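The final "kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)" line compares the local kubectl version against the cluster's control-plane version and reports the difference in minor versions. A tiny sketch of that comparison, assuming plain "major.minor.patch" strings; this is an illustration of the skew calculation, not minikube's version-check code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew parses "major.minor.patch" version strings and returns the
// absolute difference of their minor components.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.31.2", "1.31.2")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubectl: 1.31.2, cluster: 1.31.2 (minor skew: %d)\n", skew)
}
```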
	I1028 18:34:14.588528   66600 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:14.588660   66600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:14.588749   66600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:14.588886   66600 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:14.588985   66600 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:14.589089   66600 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:14.589179   66600 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:14.589268   66600 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:14.589362   66600 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:14.589472   66600 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:14.589575   66600 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:14.589638   66600 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:14.589739   66600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:14.902456   66600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:15.107236   66600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:15.198073   66600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:15.618175   66600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:15.804761   66600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:15.805675   66600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:15.809860   66600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:15.811538   66600 out.go:235]   - Booting up control plane ...
	I1028 18:34:15.811658   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:15.811761   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:15.812969   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:15.838182   66600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:15.846044   66600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:15.846126   66600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:15.981748   66600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:15.981899   66600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:16.483112   66600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262752ms
	I1028 18:34:16.483242   66600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:21.484655   66600 kubeadm.go:310] [api-check] The API server is healthy after 5.001327308s
	I1028 18:34:21.498067   66600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:21.508713   66600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:21.537520   66600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:21.537724   66600 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-021370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:21.551416   66600 kubeadm.go:310] [bootstrap-token] Using token: c2otm2.eh2uwearn2r38epe
	I1028 18:34:21.552613   66600 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:21.552721   66600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:21.556871   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:21.563570   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:21.566336   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:21.569226   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:21.575090   66600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:21.890874   66600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:22.315363   66600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:22.892050   66600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:22.892097   66600 kubeadm.go:310] 
	I1028 18:34:22.892198   66600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:22.892214   66600 kubeadm.go:310] 
	I1028 18:34:22.892297   66600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:22.892308   66600 kubeadm.go:310] 
	I1028 18:34:22.892346   66600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:22.892457   66600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:22.892549   66600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:22.892559   66600 kubeadm.go:310] 
	I1028 18:34:22.892628   66600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:22.892643   66600 kubeadm.go:310] 
	I1028 18:34:22.892705   66600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:22.892715   66600 kubeadm.go:310] 
	I1028 18:34:22.892784   66600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:22.892851   66600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:22.892958   66600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:22.892981   66600 kubeadm.go:310] 
	I1028 18:34:22.893093   66600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:22.893197   66600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:22.893212   66600 kubeadm.go:310] 
	I1028 18:34:22.893320   66600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893460   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:22.893506   66600 kubeadm.go:310] 	--control-plane 
	I1028 18:34:22.893515   66600 kubeadm.go:310] 
	I1028 18:34:22.893622   66600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:22.893631   66600 kubeadm.go:310] 
	I1028 18:34:22.893728   66600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893886   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:22.894813   66600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:22.895022   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:34:22.895037   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:22.897376   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:22.898532   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:22.909363   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:34:22.930151   66600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:22.930190   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:22.930280   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-021370 minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=embed-certs-021370 minikube.k8s.io/primary=true
	I1028 18:34:22.963249   66600 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:23.216574   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:23.717592   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.217674   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.717602   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.216832   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.717673   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.217668   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.716727   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.217476   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.343171   66600 kubeadm.go:1113] duration metric: took 4.413029537s to wait for elevateKubeSystemPrivileges
	I1028 18:34:27.343201   66600 kubeadm.go:394] duration metric: took 5m1.603783417s to StartCluster
	I1028 18:34:27.343221   66600 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.343302   66600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:27.344913   66600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.345149   66600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:27.345210   66600 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:27.345282   66600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-021370"
	I1028 18:34:27.345297   66600 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-021370"
	W1028 18:34:27.345304   66600 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:27.345310   66600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-021370"
	I1028 18:34:27.345339   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345337   66600 addons.go:69] Setting metrics-server=true in profile "embed-certs-021370"
	I1028 18:34:27.345353   66600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-021370"
	I1028 18:34:27.345360   66600 addons.go:234] Setting addon metrics-server=true in "embed-certs-021370"
	W1028 18:34:27.345369   66600 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:27.345381   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:27.345396   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345742   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345788   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345794   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345798   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.346770   66600 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:27.348169   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:27.361310   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1028 18:34:27.361763   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362073   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 18:34:27.362257   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.362292   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.362550   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362640   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363049   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.363079   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.363204   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.363242   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.363425   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363610   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.363934   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1028 18:34:27.364390   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.364865   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.364885   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.365229   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.365805   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.365852   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.367292   66600 addons.go:234] Setting addon default-storageclass=true in "embed-certs-021370"
	W1028 18:34:27.367314   66600 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:27.367347   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.367738   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.367782   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.381375   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1028 18:34:27.381846   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.382429   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.382441   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.382787   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.382926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.382965   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I1028 18:34:27.383568   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.384121   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.384134   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.384530   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.384730   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.384815   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386107   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I1028 18:34:27.386306   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386435   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.386888   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.386911   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.386977   66600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:27.387284   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.387866   66600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:27.387883   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.388259   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.388628   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:27.388645   66600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:27.388658   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.390614   66600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.390634   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:27.390650   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.393252   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393734   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.393758   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.394122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.394238   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.394364   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.394640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395084   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.395110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.395383   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.395540   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.395677   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.406551   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1028 18:34:27.406907   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.407358   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.407376   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.407699   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.407891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.409287   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.409489   66600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.409502   66600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:27.409517   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.412275   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412828   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.412858   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412984   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.413162   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.413303   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.413453   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.546891   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:27.571837   66600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595105   66600 node_ready.go:49] node "embed-certs-021370" has status "Ready":"True"
	I1028 18:34:27.595127   66600 node_ready.go:38] duration metric: took 23.255834ms for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595156   66600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:27.603107   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:27.635422   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.657051   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.666085   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:27.666110   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:27.706366   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:27.706394   66600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:27.772162   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:27.772191   66600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:27.844116   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:28.411454   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411478   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411522   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411544   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411751   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.411960   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.411982   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.411991   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411998   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.412223   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.412266   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413310   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413326   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413338   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.413344   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.413569   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413584   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.420867   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.420891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.421092   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.421168   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.421169   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957337   66600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.11317187s)
	I1028 18:34:28.957385   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957395   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957696   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957715   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957725   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957733   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957957   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957970   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957988   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957990   66600 addons.go:475] Verifying addon metrics-server=true in "embed-certs-021370"
	I1028 18:34:28.959590   66600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:28.961127   66600 addons.go:510] duration metric: took 1.615922156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:29.611126   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:32.110577   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:34.610544   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:37.111319   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.111342   66600 pod_ready.go:82] duration metric: took 9.508204126s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.111351   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119547   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.119571   66600 pod_ready.go:82] duration metric: took 8.212577ms for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119581   66600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126030   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.126048   66600 pod_ready.go:82] duration metric: took 6.46043ms for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126056   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132366   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.132386   66600 pod_ready.go:82] duration metric: took 6.323715ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132394   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137151   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.137171   66600 pod_ready.go:82] duration metric: took 4.770272ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137182   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507159   66600 pod_ready.go:93] pod "kube-proxy-nrr6g" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.507180   66600 pod_ready.go:82] duration metric: took 369.991591ms for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507189   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908006   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.908030   66600 pod_ready.go:82] duration metric: took 400.834669ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908038   66600 pod_ready.go:39] duration metric: took 10.312872321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:37.908052   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:37.908098   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:37.924515   66600 api_server.go:72] duration metric: took 10.579335154s to wait for apiserver process to appear ...
	I1028 18:34:37.924552   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:37.924572   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:34:37.929438   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:34:37.930716   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:37.930742   66600 api_server.go:131] duration metric: took 6.181503ms to wait for apiserver health ...
	I1028 18:34:37.930752   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:38.113401   66600 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:38.113430   66600 system_pods.go:61] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.113435   66600 system_pods.go:61] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.113439   66600 system_pods.go:61] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.113442   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.113446   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.113449   66600 system_pods.go:61] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.113452   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.113457   66600 system_pods.go:61] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.113462   66600 system_pods.go:61] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.113468   66600 system_pods.go:74] duration metric: took 182.711396ms to wait for pod list to return data ...
	I1028 18:34:38.113475   66600 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:38.309139   66600 default_sa.go:45] found service account: "default"
	I1028 18:34:38.309170   66600 default_sa.go:55] duration metric: took 195.688587ms for default service account to be created ...
	I1028 18:34:38.309182   66600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:38.510307   66600 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:38.510336   66600 system_pods.go:89] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.510341   66600 system_pods.go:89] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.510345   66600 system_pods.go:89] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.510349   66600 system_pods.go:89] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.510352   66600 system_pods.go:89] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.510355   66600 system_pods.go:89] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.510360   66600 system_pods.go:89] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.510368   66600 system_pods.go:89] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.510376   66600 system_pods.go:89] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.510391   66600 system_pods.go:126] duration metric: took 201.199416ms to wait for k8s-apps to be running ...
	I1028 18:34:38.510403   66600 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:38.510448   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:38.526043   66600 system_svc.go:56] duration metric: took 15.628796ms WaitForService to wait for kubelet
	I1028 18:34:38.526075   66600 kubeadm.go:582] duration metric: took 11.18089878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:38.526109   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:38.707568   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:38.707594   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:38.707604   66600 node_conditions.go:105] duration metric: took 181.491056ms to run NodePressure ...
	I1028 18:34:38.707615   66600 start.go:241] waiting for startup goroutines ...
	I1028 18:34:38.707621   66600 start.go:246] waiting for cluster config update ...
	I1028 18:34:38.707631   66600 start.go:255] writing updated cluster config ...
	I1028 18:34:38.707950   66600 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:38.755355   66600 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:38.757256   66600 out.go:177] * Done! kubectl is now configured to use "embed-certs-021370" cluster and "default" namespace by default
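	(The api_server.go lines above wait for the kube-apiserver process and then probe https://192.168.50.62:8443/healthz until it returns 200. Below is a minimal Go sketch of such a healthz poll, assuming the endpoint from the log and skipping TLS verification for brevity, whereas a real client would trust the cluster CA; it is an illustration only, not the code minikube runs.)

	// Illustrative sketch: poll an apiserver /healthz endpoint until it
	// returns HTTP 200, similar in spirit to the api_server.go wait loop above.
	// The URL and InsecureSkipVerify are assumptions for a self-contained example.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses a self-signed CA; a production client would
				// load the cluster CA certificate instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.62:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}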
	I1028 18:34:49.381931   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:34:49.382111   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:34:49.383570   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:34:49.383633   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:49.383732   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:49.383859   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:49.383975   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:34:49.384073   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:49.385654   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:49.385757   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:49.385847   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:49.385937   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:49.386008   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:49.386118   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:49.386214   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:49.386316   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:49.386391   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:49.386478   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:49.386597   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:49.386643   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:49.386724   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:49.386813   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:49.386891   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:49.386983   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:49.387070   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:49.387209   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:49.387330   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:49.387389   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:49.387474   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:49.389653   67149 out.go:235]   - Booting up control plane ...
	I1028 18:34:49.389760   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:49.389867   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:49.389971   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:49.390088   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:49.390228   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:34:49.390277   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:34:49.390355   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390550   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390645   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390832   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390903   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391069   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391163   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391354   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391452   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391649   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391657   67149 kubeadm.go:310] 
	I1028 18:34:49.391691   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:34:49.391743   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:34:49.391758   67149 kubeadm.go:310] 
	I1028 18:34:49.391789   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:34:49.391822   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:34:49.391908   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:34:49.391914   67149 kubeadm.go:310] 
	I1028 18:34:49.392024   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:34:49.392073   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:34:49.392133   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:34:49.392142   67149 kubeadm.go:310] 
	I1028 18:34:49.392267   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:34:49.392363   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:34:49.392380   67149 kubeadm.go:310] 
	I1028 18:34:49.392525   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:34:49.392629   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:34:49.392737   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:34:49.392830   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:34:49.392879   67149 kubeadm.go:310] 
	W1028 18:34:49.392949   67149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:34:49.392991   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:34:49.869859   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:49.884524   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:49.896293   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:49.896318   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:49.896354   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:49.907312   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:49.907364   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:49.917926   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:49.928001   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:49.928048   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:49.938687   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.949217   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:49.949268   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.959955   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:49.970105   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:49.970156   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:49.980760   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:50.212973   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:36:46.686631   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:36:46.686753   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:36:46.688224   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:36:46.688325   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:36:46.688449   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:36:46.688587   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:36:46.688726   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:36:46.688813   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:36:46.690320   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:36:46.690427   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:36:46.690524   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:36:46.690627   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:36:46.690720   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:36:46.690824   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:36:46.690897   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:36:46.690984   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:36:46.691064   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:36:46.691161   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:36:46.691253   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:36:46.691309   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:36:46.691379   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:36:46.691426   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:36:46.691471   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:36:46.691547   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:36:46.691619   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:36:46.691713   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:36:46.691814   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:36:46.691864   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:36:46.691951   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:36:46.693258   67149 out.go:235]   - Booting up control plane ...
	I1028 18:36:46.693374   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:36:46.693471   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:36:46.693566   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:36:46.693682   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:36:46.693870   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:36:46.693930   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:36:46.694023   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694253   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694343   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694527   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694614   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694798   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694894   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695053   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695119   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695315   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695324   67149 kubeadm.go:310] 
	I1028 18:36:46.695357   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:36:46.695392   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:36:46.695398   67149 kubeadm.go:310] 
	I1028 18:36:46.695427   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:36:46.695456   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:36:46.695542   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:36:46.695549   67149 kubeadm.go:310] 
	I1028 18:36:46.695665   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:36:46.695717   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:36:46.695767   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:36:46.695781   67149 kubeadm.go:310] 
	I1028 18:36:46.695921   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:36:46.696037   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:36:46.696048   67149 kubeadm.go:310] 
	I1028 18:36:46.696177   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:36:46.696285   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:36:46.696390   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:36:46.696512   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:36:46.696560   67149 kubeadm.go:310] 
	I1028 18:36:46.696579   67149 kubeadm.go:394] duration metric: took 7m56.601380499s to StartCluster
	I1028 18:36:46.696618   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:36:46.696670   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:36:46.738714   67149 cri.go:89] found id: ""
	I1028 18:36:46.738741   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.738749   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:36:46.738757   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:36:46.738822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:36:46.772906   67149 cri.go:89] found id: ""
	I1028 18:36:46.772934   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.772944   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:36:46.772951   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:36:46.773028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:36:46.808785   67149 cri.go:89] found id: ""
	I1028 18:36:46.808809   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.808819   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:36:46.808827   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:36:46.808884   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:36:46.842977   67149 cri.go:89] found id: ""
	I1028 18:36:46.843007   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.843016   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:36:46.843022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:36:46.843095   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:36:46.878121   67149 cri.go:89] found id: ""
	I1028 18:36:46.878148   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.878159   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:36:46.878166   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:36:46.878231   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:36:46.911953   67149 cri.go:89] found id: ""
	I1028 18:36:46.911977   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.911984   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:36:46.911990   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:36:46.912054   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:36:46.944291   67149 cri.go:89] found id: ""
	I1028 18:36:46.944317   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.944324   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:36:46.944329   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:36:46.944379   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:36:46.976525   67149 cri.go:89] found id: ""
	I1028 18:36:46.976554   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.976564   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:36:46.976575   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:36:46.976588   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:36:47.026517   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:36:47.026544   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:36:47.041198   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:36:47.041231   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:36:47.115650   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:36:47.115681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:36:47.115695   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:36:47.218059   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:36:47.218093   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 18:36:47.257114   67149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:36:47.257182   67149 out.go:270] * 
	W1028 18:36:47.257240   67149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.257280   67149 out.go:270] * 
	W1028 18:36:47.258088   67149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:36:47.261521   67149 out.go:201] 
	W1028 18:36:47.262707   67149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.262742   67149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:36:47.262760   67149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:36:47.264073   67149 out.go:201] 
	
	
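	For reference, a minimal sketch of the troubleshooting sequence the kubeadm output above recommends, run from inside the affected node (for example via 'minikube ssh -p <profile>'; the profile name is a placeholder, and the crio socket path is the one shown in the log):
	
		# Check whether the kubelet is running and inspect its recent log entries
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
	
		# List the Kubernetes control-plane containers known to CRI-O, then pull logs for a failing one
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>
	
	In this run the crictl probes above returned no kube-apiserver, etcd, scheduler, or controller-manager containers, which is consistent with the kubelet never becoming healthy rather than a single crashed component.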
	==> CRI-O <==
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.572488679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141020572463504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa902f78-5924-42fb-a5e7-da3ffc639fe6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.573669022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b5b6e30-64d2-4457-a302-1d9b940a0640 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.573751872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b5b6e30-64d2-4457-a302-1d9b940a0640 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.574351602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b5b6e30-64d2-4457-a302-1d9b940a0640 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.616846666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c778fc08-d72e-4f8f-baf2-fe795830d935 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.616917387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c778fc08-d72e-4f8f-baf2-fe795830d935 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.618194576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba4f8d02-9d88-47de-8f2d-6a8655770bf0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.618623827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141020618540065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba4f8d02-9d88-47de-8f2d-6a8655770bf0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.619162898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1801dbe1-0416-4d60-89ee-3477a4026c74 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.619236642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1801dbe1-0416-4d60-89ee-3477a4026c74 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.619426571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1801dbe1-0416-4d60-89ee-3477a4026c74 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.666313239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b23bde1-3ee9-4592-8f2a-408b2c91ae0d name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.666381747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b23bde1-3ee9-4592-8f2a-408b2c91ae0d name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.667976319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8372cea6-5c6f-43e4-b398-407c833074a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.668362567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141020668342265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8372cea6-5c6f-43e4-b398-407c833074a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.669050341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5d53a2a-bf02-4f12-abd7-ace88ba23792 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.669103359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5d53a2a-bf02-4f12-abd7-ace88ba23792 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.669301822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5d53a2a-bf02-4f12-abd7-ace88ba23792 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.711140791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b9d0c11-f383-46c1-9706-f009735ea26b name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.711253444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b9d0c11-f383-46c1-9706-f009735ea26b name=/runtime.v1.RuntimeService/Version
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.712522211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cabe5c4e-3f9d-47ea-9955-e10616458be4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.713160993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141020713137941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cabe5c4e-3f9d-47ea-9955-e10616458be4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.713842242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a87b3c2-af21-4f3c-a0a1-c91d39c55170 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.713892751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a87b3c2-af21-4f3c-a0a1-c91d39c55170 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:43:40 embed-certs-021370 crio[708]: time="2024-10-28 18:43:40.714095441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a87b3c2-af21-4f3c-a0a1-c91d39c55170 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	006aada4f30c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   413e67bcd3b89       storage-provisioner
	104264eddd009       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   7fbafc94d70e0       coredns-7c65d6cfc9-qw5gl
	71661974f7b38       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   787d8ca532edb       coredns-7c65d6cfc9-d5pk8
	ee22f5ea76449       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   b0f9161c9eb29       kube-proxy-nrr6g
	923e774fae799       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   46f3ff135b802       kube-scheduler-embed-certs-021370
	84f43ce11e608       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   977cc130e7084       etcd-embed-certs-021370
	3f031e8707fea       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   58b3ae9b9ad90       kube-controller-manager-embed-certs-021370
	d269f62b266bb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   379268ef9c69e       kube-apiserver-embed-certs-021370
	f7431ff218449       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   41009c46e8497       kube-apiserver-embed-certs-021370
	
	
	==> coredns [104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-021370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-021370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=embed-certs-021370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-021370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:43:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:39:38 +0000   Mon, 28 Oct 2024 18:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:39:38 +0000   Mon, 28 Oct 2024 18:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:39:38 +0000   Mon, 28 Oct 2024 18:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:39:38 +0000   Mon, 28 Oct 2024 18:34:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.62
	  Hostname:    embed-certs-021370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e43992f2590c4869aa99fe323aa72fba
	  System UUID:                e43992f2-590c-4869-aa99-fe323aa72fba
	  Boot ID:                    e1a99776-ff86-4bdc-98df-70ca9124588c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-d5pk8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-qw5gl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-embed-certs-021370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-021370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-021370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-nrr6g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-embed-certs-021370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-hpwrm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node embed-certs-021370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node embed-certs-021370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node embed-certs-021370 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s  node-controller  Node embed-certs-021370 event: Registered Node embed-certs-021370 in Controller
	
	
	==> dmesg <==
	[  +0.051294] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040972] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.136975] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.498753] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.648113] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.698016] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.077008] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056632] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.182596] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.148543] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.300354] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.011500] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.252435] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.071622] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.559869] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.959680] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 18:34] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.641044] systemd-fstab-generator[2622]: Ignoring "noauto" option for root device
	[  +4.527852] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.518295] systemd-fstab-generator[2942]: Ignoring "noauto" option for root device
	[  +5.484211] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +0.107812] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.540848] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f] <==
	{"level":"info","ts":"2024-10-28T18:34:17.425113Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T18:34:17.425193Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-10-28T18:34:17.425351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-10-28T18:34:17.428447Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"48d332b29d0cdf97","initial-advertise-peer-urls":["https://192.168.50.62:2380"],"listen-peer-urls":["https://192.168.50.62:2380"],"advertise-client-urls":["https://192.168.50.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T18:34:17.428521Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T18:34:17.584624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T18:34:17.584679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T18:34:17.584694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 received MsgPreVoteResp from 48d332b29d0cdf97 at term 1"}
	{"level":"info","ts":"2024-10-28T18:34:17.584705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T18:34:17.584710Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 received MsgVoteResp from 48d332b29d0cdf97 at term 2"}
	{"level":"info","ts":"2024-10-28T18:34:17.584719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became leader at term 2"}
	{"level":"info","ts":"2024-10-28T18:34:17.584732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 48d332b29d0cdf97 elected leader 48d332b29d0cdf97 at term 2"}
	{"level":"info","ts":"2024-10-28T18:34:17.588784Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"48d332b29d0cdf97","local-member-attributes":"{Name:embed-certs-021370 ClientURLs:[https://192.168.50.62:2379]}","request-path":"/0/members/48d332b29d0cdf97/attributes","cluster-id":"4f4301e400b1ef13","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T18:34:17.588829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:34:17.589144Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:34:17.589372Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:34:17.589609Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:34:17.589639Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T18:34:17.590261Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:34:17.598257Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.62:2379"}
	{"level":"info","ts":"2024-10-28T18:34:17.598828Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:34:17.598901Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:34:17.599342Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:34:17.600041Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T18:34:17.600345Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:43:41 up 14 min,  0 users,  load average: 0.00, 0.10, 0.11
	Linux embed-certs-021370 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49] <==
	E1028 18:39:20.680504       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 18:39:20.680661       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:39:20.681813       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:39:20.681891       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:40:20.682544       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:40:20.682677       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:40:20.682793       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 18:40:20.682906       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:40:20.684107       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:40:20.684176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:42:20.684836       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:42:20.685124       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:42:20.685322       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1028 18:42:20.685324       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 18:42:20.686689       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:42:20.686768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46] <==
	W1028 18:34:09.248068       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.249537       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.263327       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.338448       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.343944       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.355980       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.405406       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.459860       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.464309       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.479794       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.494416       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.531116       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.736328       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.759522       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.788096       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.837471       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.137101       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.221676       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.231215       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.247770       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.307447       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.320449       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.326083       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.518661       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:12.441848       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da] <==
	E1028 18:38:26.654406       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:38:27.086228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:38:56.661834       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:38:57.094781       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:39:26.669239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:39:27.103803       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:39:38.275998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-021370"
	E1028 18:39:56.676315       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:39:57.111991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:40:26.682071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:40:27.119994       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:40:34.217867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="220.415µs"
	I1028 18:40:46.216518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.255µs"
	E1028 18:40:56.696546       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:40:57.128484       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:41:26.704085       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:41:27.138226       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:41:56.710966       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:41:57.146266       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:42:26.717332       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:42:27.154675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:42:56.725145       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:42:57.162500       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:43:26.732278       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:43:27.170103       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:34:29.300499       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:34:29.380768       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.62"]
	E1028 18:34:29.419670       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:34:29.525141       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:34:29.525218       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:34:29.525329       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:34:29.529934       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:34:29.530152       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:34:29.530316       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:34:29.533020       1 config.go:199] "Starting service config controller"
	I1028 18:34:29.536722       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:34:29.534916       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:34:29.536795       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:34:29.535461       1 config.go:328] "Starting node config controller"
	I1028 18:34:29.536829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:34:29.637253       1 shared_informer.go:320] Caches are synced for node config
	I1028 18:34:29.637339       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:34:29.637384       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41] <==
	E1028 18:34:19.708190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:19.707376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:34:19.708247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:19.706864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:34:19.708264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1028 18:34:19.708093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.546747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:34:20.546782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.550278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 18:34:20.550304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.566598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 18:34:20.566683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.642152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:34:20.642390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.643497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 18:34:20.643548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.732873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 18:34:20.733140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.733942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 18:34:20.734135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.786549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 18:34:20.786640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:21.063446       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 18:34:21.063496       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1028 18:34:23.196700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 18:42:30 embed-certs-021370 kubelet[2949]: E1028 18:42:30.202126    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:42:32 embed-certs-021370 kubelet[2949]: E1028 18:42:32.373339    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140952373004045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:32 embed-certs-021370 kubelet[2949]: E1028 18:42:32.373867    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140952373004045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:42 embed-certs-021370 kubelet[2949]: E1028 18:42:42.375849    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140962375499572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:42 embed-certs-021370 kubelet[2949]: E1028 18:42:42.376175    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140962375499572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:43 embed-certs-021370 kubelet[2949]: E1028 18:42:43.201238    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:42:52 embed-certs-021370 kubelet[2949]: E1028 18:42:52.378766    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140972378140003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:52 embed-certs-021370 kubelet[2949]: E1028 18:42:52.379055    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140972378140003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:42:54 embed-certs-021370 kubelet[2949]: E1028 18:42:54.200801    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:43:02 embed-certs-021370 kubelet[2949]: E1028 18:43:02.381094    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140982380810697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:02 embed-certs-021370 kubelet[2949]: E1028 18:43:02.381371    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140982380810697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:08 embed-certs-021370 kubelet[2949]: E1028 18:43:08.202449    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:43:12 embed-certs-021370 kubelet[2949]: E1028 18:43:12.383352    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140992383020680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:12 embed-certs-021370 kubelet[2949]: E1028 18:43:12.384294    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730140992383020680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:19 embed-certs-021370 kubelet[2949]: E1028 18:43:19.201501    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:43:22 embed-certs-021370 kubelet[2949]: E1028 18:43:22.223696    2949 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 18:43:22 embed-certs-021370 kubelet[2949]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 18:43:22 embed-certs-021370 kubelet[2949]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 18:43:22 embed-certs-021370 kubelet[2949]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 18:43:22 embed-certs-021370 kubelet[2949]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 18:43:22 embed-certs-021370 kubelet[2949]: E1028 18:43:22.385665    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141002385276734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:22 embed-certs-021370 kubelet[2949]: E1028 18:43:22.385808    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141002385276734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:32 embed-certs-021370 kubelet[2949]: E1028 18:43:32.200742    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:43:32 embed-certs-021370 kubelet[2949]: E1028 18:43:32.388203    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141012387684145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:43:32 embed-certs-021370 kubelet[2949]: E1028 18:43:32.388250    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141012387684145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92] <==
	I1028 18:34:29.298734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 18:34:29.345011       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 18:34:29.345100       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 18:34:29.449061       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 18:34:29.452467       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-021370_58a5ed82-70b3-4caf-82ff-0532950f2f11!
	I1028 18:34:29.467770       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f36b46b-aaf8-4653-8eec-b712cce1fd67", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-021370_58a5ed82-70b3-4caf-82ff-0532950f2f11 became leader
	I1028 18:34:29.553845       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-021370_58a5ed82-70b3-4caf-82ff-0532950f2f11!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021370 -n embed-certs-021370
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-021370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hpwrm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-021370 describe pod metrics-server-6867b74b74-hpwrm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-021370 describe pod metrics-server-6867b74b74-hpwrm: exit status 1 (61.183979ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hpwrm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-021370 describe pod metrics-server-6867b74b74-hpwrm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
[... the preceding "connection refused" warning repeats verbatim while the test keeps polling https://192.168.83.194:8443; identical lines trimmed ...]
E1028 18:38:38.395002   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
[... identical "connection refused" warnings continue after this error; trimmed ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
E1028 18:40:33.436007   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
[previous warning repeated 84 more times while the API server at 192.168.83.194:8443 remained unreachable]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
E1028 18:43:36.515313   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
[duplicate of the warning above omitted]
E1028 18:43:38.395348   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
[the warning above repeated verbatim 58 more times; duplicates omitted]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
E1028 18:45:33.435491   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
	[last message repeated 16 more times before the client rate limiter gave up at the context deadline]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (220.857936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-223868" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
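The wait that failed above is a simple poll: list pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard selector, retry on transient errors (every attempt here hit "connection refused" because the apiserver never came back after the stop/start), and give up once the 9m0s window expires with "context deadline exceeded". Below is a minimal client-go sketch of that kind of wait, useful for reproducing the check outside the test harness. It is an illustrative sketch, not the actual helpers_test.go implementation; the kubeconfig path, interval, and timeout are taken from this run for convenience.

// Hedged sketch (not the real test helper): poll the kubernetes-dashboard
// namespace for pods matching k8s-app=kubernetes-dashboard until one is
// Running or the deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from this run; adjust for a local reproduction.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19872-13443/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 5s for up to 9m, the same window the test waited for.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Transient errors (e.g. connection refused while the apiserver
				// is down) are logged and retried rather than aborting the wait.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("pod %q failed to start: %v\n", "k8s-app=kubernetes-dashboard", err)
	}
}

Returning false, nil from the condition on a list error is what makes the failures show up as the repeated warnings above rather than an immediate abort; only the expiring context ends the wait.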
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (216.741829ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-223868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-223868 logs -n 25: (1.455009221s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-703793                              | running-upgrade-703793       | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-021370            | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:25:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:25:35.146308   67489 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:25:35.146467   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146474   67489 out.go:358] Setting ErrFile to fd 2...
	I1028 18:25:35.146480   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146973   67489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:25:35.147825   67489 out.go:352] Setting JSON to false
	I1028 18:25:35.148718   67489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7678,"bootTime":1730132257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:25:35.148810   67489 start.go:139] virtualization: kvm guest
	I1028 18:25:35.150695   67489 out.go:177] * [default-k8s-diff-port-692033] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:25:35.151797   67489 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:25:35.151797   67489 notify.go:220] Checking for updates...
	I1028 18:25:35.154193   67489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:25:35.155491   67489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:25:35.156576   67489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:25:35.157619   67489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:25:35.158702   67489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:25:35.160202   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:25:35.160602   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.160658   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.175095   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I1028 18:25:35.175421   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.175848   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.175863   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.176187   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.176387   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.176667   67489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:25:35.177210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.177325   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.191270   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I1028 18:25:35.191687   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.192092   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.192114   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.192388   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.192551   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.222738   67489 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:25:35.223900   67489 start.go:297] selected driver: kvm2
	I1028 18:25:35.223910   67489 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.224018   67489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:25:35.224696   67489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.224770   67489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:25:35.238839   67489 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:25:35.239228   67489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:25:35.239258   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:25:35.239310   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:25:35.239360   67489 start.go:340] cluster config:
	{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.239480   67489 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.241175   67489 out.go:177] * Starting "default-k8s-diff-port-692033" primary control-plane node in "default-k8s-diff-port-692033" cluster
	I1028 18:25:37.248702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:35.242393   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:25:35.242423   67489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:25:35.242432   67489 cache.go:56] Caching tarball of preloaded images
	I1028 18:25:35.242504   67489 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:25:35.242517   67489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:25:35.242600   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:25:35.242763   67489 start.go:360] acquireMachinesLock for default-k8s-diff-port-692033: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:25:40.320712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:46.400713   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:49.472709   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:55.552712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:58.624703   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:04.704707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:07.776740   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:13.856735   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:16.928744   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:23.008721   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:26.080668   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:32.160706   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:35.232663   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:41.312774   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:44.384739   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:50.464729   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:53.536702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:59.616750   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:02.688719   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:08.768731   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:11.840771   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:17.920756   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:20.992753   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:27.072785   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:30.144726   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:36.224704   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:39.296825   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:45.376692   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:48.448699   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:54.528707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:57.600754   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:28:00.605468   66801 start.go:364] duration metric: took 4m12.368996576s to acquireMachinesLock for "no-preload-051152"
	I1028 18:28:00.605517   66801 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:00.605525   66801 fix.go:54] fixHost starting: 
	I1028 18:28:00.605815   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:00.605850   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:00.621828   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1028 18:28:00.622237   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:00.622654   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:28:00.622674   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:00.622975   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:00.623150   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:00.623272   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:28:00.624880   66801 fix.go:112] recreateIfNeeded on no-preload-051152: state=Stopped err=<nil>
	I1028 18:28:00.624910   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	W1028 18:28:00.625076   66801 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:00.627065   66801 out.go:177] * Restarting existing kvm2 VM for "no-preload-051152" ...
	I1028 18:28:00.603089   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:00.603122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603425   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:28:00.603450   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603663   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:28:00.605343   66600 machine.go:96] duration metric: took 4m37.432159141s to provisionDockerMachine
	I1028 18:28:00.605380   66600 fix.go:56] duration metric: took 4m37.452432846s for fixHost
	I1028 18:28:00.605387   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 4m37.452449736s
	W1028 18:28:00.605419   66600 start.go:714] error starting host: provision: host is not running
	W1028 18:28:00.605517   66600 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 18:28:00.605528   66600 start.go:729] Will try again in 5 seconds ...
	I1028 18:28:00.628172   66801 main.go:141] libmachine: (no-preload-051152) Calling .Start
	I1028 18:28:00.628308   66801 main.go:141] libmachine: (no-preload-051152) Ensuring networks are active...
	I1028 18:28:00.629123   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network default is active
	I1028 18:28:00.629467   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network mk-no-preload-051152 is active
	I1028 18:28:00.629782   66801 main.go:141] libmachine: (no-preload-051152) Getting domain xml...
	I1028 18:28:00.630687   66801 main.go:141] libmachine: (no-preload-051152) Creating domain...
	I1028 18:28:01.819872   66801 main.go:141] libmachine: (no-preload-051152) Waiting to get IP...
	I1028 18:28:01.820792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:01.821214   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:01.821287   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:01.821204   68016 retry.go:31] will retry after 269.081621ms: waiting for machine to come up
	I1028 18:28:02.091799   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.092220   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.092242   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.092175   68016 retry.go:31] will retry after 341.926163ms: waiting for machine to come up
	I1028 18:28:02.435679   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.436035   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.436067   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.435982   68016 retry.go:31] will retry after 355.739166ms: waiting for machine to come up
	I1028 18:28:02.793549   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.793928   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.793953   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.793881   68016 retry.go:31] will retry after 496.396184ms: waiting for machine to come up
	I1028 18:28:05.607678   66600 start.go:360] acquireMachinesLock for embed-certs-021370: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:28:03.291568   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.292038   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.292068   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.291978   68016 retry.go:31] will retry after 561.311245ms: waiting for machine to come up
	I1028 18:28:03.854782   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.855137   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.855166   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.855088   68016 retry.go:31] will retry after 574.675969ms: waiting for machine to come up
	I1028 18:28:04.431784   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:04.432226   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:04.432250   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:04.432177   68016 retry.go:31] will retry after 1.028136295s: waiting for machine to come up
	I1028 18:28:05.461477   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:05.461839   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:05.461869   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:05.461795   68016 retry.go:31] will retry after 955.343831ms: waiting for machine to come up
	I1028 18:28:06.418161   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:06.418629   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:06.418659   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:06.418576   68016 retry.go:31] will retry after 1.615930502s: waiting for machine to come up
	I1028 18:28:08.036275   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:08.036641   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:08.036662   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:08.036615   68016 retry.go:31] will retry after 2.111463198s: waiting for machine to come up
	I1028 18:28:10.150891   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:10.151403   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:10.151429   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:10.151351   68016 retry.go:31] will retry after 2.35232289s: waiting for machine to come up
	I1028 18:28:12.506070   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:12.506471   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:12.506494   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:12.506447   68016 retry.go:31] will retry after 2.874687772s: waiting for machine to come up
	I1028 18:28:15.384360   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:15.384680   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:15.384712   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:15.384636   68016 retry.go:31] will retry after 3.299950406s: waiting for machine to come up
	I1028 18:28:19.893083   67149 start.go:364] duration metric: took 3m43.747535803s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:28:19.893161   67149 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:19.893170   67149 fix.go:54] fixHost starting: 
	I1028 18:28:19.893556   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:19.893608   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:19.909857   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I1028 18:28:19.910215   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:19.910669   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:28:19.910690   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:19.911049   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:19.911241   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:19.911395   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:28:19.912825   67149 fix.go:112] recreateIfNeeded on old-k8s-version-223868: state=Stopped err=<nil>
	I1028 18:28:19.912856   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	W1028 18:28:19.912996   67149 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:19.915041   67149 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	I1028 18:28:19.916422   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .Start
	I1028 18:28:19.916611   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:28:19.917295   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:28:19.917560   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:28:19.917951   67149 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:28:19.918628   67149 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:28:18.688243   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.688710   66801 main.go:141] libmachine: (no-preload-051152) Found IP for machine: 192.168.61.78
	I1028 18:28:18.688738   66801 main.go:141] libmachine: (no-preload-051152) Reserving static IP address...
	I1028 18:28:18.688754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has current primary IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.689151   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.689174   66801 main.go:141] libmachine: (no-preload-051152) Reserved static IP address: 192.168.61.78
	I1028 18:28:18.689188   66801 main.go:141] libmachine: (no-preload-051152) DBG | skip adding static IP to network mk-no-preload-051152 - found existing host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"}
	I1028 18:28:18.689198   66801 main.go:141] libmachine: (no-preload-051152) Waiting for SSH to be available...
	I1028 18:28:18.689217   66801 main.go:141] libmachine: (no-preload-051152) DBG | Getting to WaitForSSH function...
	I1028 18:28:18.691372   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691721   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.691754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691861   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH client type: external
	I1028 18:28:18.691890   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa (-rw-------)
	I1028 18:28:18.691950   66801 main.go:141] libmachine: (no-preload-051152) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:18.691967   66801 main.go:141] libmachine: (no-preload-051152) DBG | About to run SSH command:
	I1028 18:28:18.691979   66801 main.go:141] libmachine: (no-preload-051152) DBG | exit 0
	I1028 18:28:18.816169   66801 main.go:141] libmachine: (no-preload-051152) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:18.816571   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetConfigRaw
	I1028 18:28:18.817209   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:18.819569   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.819891   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.819913   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.820164   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:28:18.820375   66801 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:18.820392   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:18.820618   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.822580   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.822953   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.822983   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.823096   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.823250   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823390   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823537   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.823687   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.823878   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.823890   66801 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:18.932489   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:18.932516   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.932769   66801 buildroot.go:166] provisioning hostname "no-preload-051152"
	I1028 18:28:18.932798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.933003   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.935565   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.935938   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.935965   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.936147   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.936346   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936513   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936674   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.936838   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.936994   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.937006   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-051152 && echo "no-preload-051152" | sudo tee /etc/hostname
	I1028 18:28:19.057840   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-051152
	
	I1028 18:28:19.057872   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.060536   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.060917   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.060946   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.061068   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.061237   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061405   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061544   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.061700   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.061848   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.061863   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-051152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-051152/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-051152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:19.180890   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:19.180920   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:19.180957   66801 buildroot.go:174] setting up certificates
	I1028 18:28:19.180971   66801 provision.go:84] configureAuth start
	I1028 18:28:19.180985   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:19.181299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.183792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184144   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.184172   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184309   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.186298   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186588   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.186616   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186722   66801 provision.go:143] copyHostCerts
	I1028 18:28:19.186790   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:19.186804   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:19.186868   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:19.186974   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:19.186986   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:19.187023   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:19.187107   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:19.187115   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:19.187146   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:19.187197   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.no-preload-051152 san=[127.0.0.1 192.168.61.78 localhost minikube no-preload-051152]
	I1028 18:28:19.275109   66801 provision.go:177] copyRemoteCerts
	I1028 18:28:19.275175   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:19.275200   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.278392   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.278946   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.278978   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.279183   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.279454   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.279651   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.279789   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.362094   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:19.384635   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:28:19.406649   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:19.428807   66801 provision.go:87] duration metric: took 247.825267ms to configureAuth
	I1028 18:28:19.428830   66801 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:19.429026   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:28:19.429090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.431615   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.431928   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.431954   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.432090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.432278   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432434   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432602   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.432786   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.432932   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.432946   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:19.655137   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:19.655163   66801 machine.go:96] duration metric: took 834.775161ms to provisionDockerMachine
	I1028 18:28:19.655175   66801 start.go:293] postStartSetup for "no-preload-051152" (driver="kvm2")
	I1028 18:28:19.655185   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:19.655199   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.655509   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:19.655532   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.658099   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658411   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.658442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658566   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.658744   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.658884   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.659013   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.743030   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:19.746986   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:19.747007   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:19.747081   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:19.747177   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:19.747290   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:19.756378   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:19.779243   66801 start.go:296] duration metric: took 124.056855ms for postStartSetup
	I1028 18:28:19.779283   66801 fix.go:56] duration metric: took 19.173756385s for fixHost
	I1028 18:28:19.779305   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.781887   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782205   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.782226   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782367   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.782557   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782709   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782836   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.782999   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.783180   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.783191   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:19.892920   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140099.866892804
	
	I1028 18:28:19.892944   66801 fix.go:216] guest clock: 1730140099.866892804
	I1028 18:28:19.892954   66801 fix.go:229] Guest: 2024-10-28 18:28:19.866892804 +0000 UTC Remote: 2024-10-28 18:28:19.779287594 +0000 UTC m=+271.674302547 (delta=87.60521ms)
	I1028 18:28:19.892997   66801 fix.go:200] guest clock delta is within tolerance: 87.60521ms
	I1028 18:28:19.893008   66801 start.go:83] releasing machines lock for "no-preload-051152", held for 19.287505767s
	I1028 18:28:19.893034   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.893299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.895775   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896177   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.896204   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896362   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.896826   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897023   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897133   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:19.897171   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.897267   66801 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:19.897291   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.899703   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.899995   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900031   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900054   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900208   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900374   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900416   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900550   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.900626   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900707   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.900818   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900944   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.901098   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.982201   66801 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:20.008913   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:20.157816   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:20.165773   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:20.165837   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:20.187342   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:20.187359   66801 start.go:495] detecting cgroup driver to use...
	I1028 18:28:20.187423   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:20.204825   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:20.220702   66801 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:20.220776   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:20.238812   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:20.253664   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:20.363567   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:20.534475   66801 docker.go:233] disabling docker service ...
	I1028 18:28:20.534564   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:20.548424   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:20.564292   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:20.687135   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:20.796225   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:20.810327   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:20.828804   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:28:20.828866   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.838719   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:20.838768   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.849166   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.862811   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.875223   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:20.885402   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.895602   66801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.914163   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.924194   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:20.934907   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:20.934958   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:20.948898   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:20.958955   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:21.069438   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:21.175294   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:21.175379   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:21.179886   66801 start.go:563] Will wait 60s for crictl version
	I1028 18:28:21.179942   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.184195   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:21.226939   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:21.227043   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.254702   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.284607   66801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:28:21.285906   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:21.288560   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.288918   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:21.288945   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.289132   66801 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:21.293108   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:21.307303   66801 kubeadm.go:883] updating cluster {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:21.307447   66801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:28:21.307495   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:21.347493   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:28:21.347520   66801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:21.347595   66801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.347609   66801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.347621   66801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.347656   66801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.347690   66801 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 18:28:21.347691   66801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.347758   66801 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.347695   66801 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349312   66801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.349387   66801 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 18:28:21.349402   66801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.349526   66801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.349574   66801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.349582   66801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.349632   66801 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349311   66801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
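The probe-and-lookup sequence above is minikube checking which of the required images the node already has; a rough manual equivalent on the VM (a sketch assuming SSH access to the no-preload-051152 machine, using the same commands shown in the log):

	# list images the CRI runtime already knows about
	sudo crictl images --output json
	# resolve one image ID the way the cache_images step does
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.31.2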
	I1028 18:28:21.515246   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.515760   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.543817   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 18:28:21.551755   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.562433   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.594208   66801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 18:28:21.594257   66801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.594291   66801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 18:28:21.594317   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.594323   66801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.594364   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.666046   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.666654   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.757831   66801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 18:28:21.757867   66801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.757867   66801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 18:28:21.757894   66801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.757914   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757926   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.757937   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757982   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.758142   66801 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 18:28:21.758161   66801 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 18:28:21.758197   66801 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.758169   66801 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.758234   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.758270   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.813746   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.813792   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.813836   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.813837   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.813840   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.813890   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.934434   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.958229   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.958287   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.958377   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.958381   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.958467   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.053179   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 18:28:22.053304   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.053351   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 18:28:22.053447   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:22.087756   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:22.087762   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:22.087826   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:22.087867   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.087897   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 18:28:22.087907   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087938   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087942   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 18:28:22.161136   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 18:28:22.161259   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:22.201924   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 18:28:22.201967   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 18:28:22.202032   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:22.202068   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:21.207941   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:28:21.209066   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.209518   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.209604   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.209495   68155 retry.go:31] will retry after 258.02952ms: waiting for machine to come up
	I1028 18:28:21.468599   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.469034   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.469052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.468996   68155 retry.go:31] will retry after 389.053264ms: waiting for machine to come up
	I1028 18:28:21.859493   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.859987   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.860017   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.859929   68155 retry.go:31] will retry after 454.438888ms: waiting for machine to come up
	I1028 18:28:22.315484   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.315961   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.315988   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.315904   68155 retry.go:31] will retry after 531.549561ms: waiting for machine to come up
	I1028 18:28:22.849247   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.849736   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.849791   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.849693   68155 retry.go:31] will retry after 602.202649ms: waiting for machine to come up
	I1028 18:28:23.453311   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:23.453859   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:23.453887   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:23.453796   68155 retry.go:31] will retry after 836.622626ms: waiting for machine to come up
	I1028 18:28:24.291959   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:24.292286   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:24.292315   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:24.292252   68155 retry.go:31] will retry after 1.187276744s: waiting for machine to come up
	I1028 18:28:25.480962   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:25.481398   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:25.481417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:25.481350   68155 retry.go:31] will retry after 1.417127806s: waiting for machine to come up
	I1028 18:28:23.586400   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.127903   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3: (2.040063682s)
	I1028 18:28:24.127962   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 18:28:24.127967   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (1.966690859s)
	I1028 18:28:24.127991   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 18:28:24.128010   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.925953727s)
	I1028 18:28:24.128034   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.925947261s)
	I1028 18:28:24.128041   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 18:28:24.128048   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 18:28:24.127904   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.03994028s)
	I1028 18:28:24.128069   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:24.128085   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 18:28:24.128109   66801 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 18:28:24.128123   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.128138   66801 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.128166   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:24.128180   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.132734   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 18:28:26.097200   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.9689964s)
	I1028 18:28:26.097240   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 18:28:26.097241   66801 ssh_runner.go:235] Completed: which crictl: (1.969052863s)
	I1028 18:28:26.097264   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.097308   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:26.097309   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.900944   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:26.901481   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:26.901511   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:26.901426   68155 retry.go:31] will retry after 1.766762252s: waiting for machine to come up
	I1028 18:28:28.670334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:28.670798   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:28.670827   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:28.670742   68155 retry.go:31] will retry after 2.287152926s: waiting for machine to come up
	I1028 18:28:30.959639   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:30.959947   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:30.959963   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:30.959917   68155 retry.go:31] will retry after 1.799223833s: waiting for machine to come up
	I1028 18:28:28.165293   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.067952153s)
	I1028 18:28:28.165410   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:28.165497   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.068111312s)
	I1028 18:28:28.165523   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 18:28:28.165548   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.165591   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.208189   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:30.152411   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.986796263s)
	I1028 18:28:30.152458   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 18:28:30.152496   66801 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152504   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.944281988s)
	I1028 18:28:30.152550   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152556   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 18:28:30.152652   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:32.761498   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:32.761941   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:32.761968   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:32.761894   68155 retry.go:31] will retry after 2.231065891s: waiting for machine to come up
	I1028 18:28:34.994438   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:34.994902   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:34.994936   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:34.994847   68155 retry.go:31] will retry after 4.079794439s: waiting for machine to come up
	I1028 18:28:33.842059   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.689484833s)
	I1028 18:28:33.842109   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 18:28:33.842138   66801 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:33.842155   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.68947822s)
	I1028 18:28:33.842184   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 18:28:33.842206   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:35.714458   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.872222439s)
	I1028 18:28:35.714493   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 18:28:35.714521   66801 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:35.714567   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:36.568124   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 18:28:36.568177   66801 cache_images.go:123] Successfully loaded all cached images
	I1028 18:28:36.568185   66801 cache_images.go:92] duration metric: took 15.220649269s to LoadCachedImages
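Each "Transferred and loaded ... from cache" line above corresponds to a podman load of a tarball staged under /var/lib/minikube/images. A minimal sketch of one such load, assuming the tarball is already on the node:

	# load a cached image tarball into the image store used by CRI-O
	sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	# confirm the runtime now sees it
	sudo crictl images | grep etcd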
	I1028 18:28:36.568199   66801 kubeadm.go:934] updating node { 192.168.61.78 8443 v1.31.2 crio true true} ...
	I1028 18:28:36.568310   66801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-051152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
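The kubelet unit fragment above is installed on the node as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). To inspect the effective unit on the VM, something like the following would work (a hypothetical follow-up, not part of the logged run):

	# show the merged kubelet unit, including the minikube drop-in
	sudo systemctl cat kubelet
	# or read the drop-in file directly
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf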
	I1028 18:28:36.568383   66801 ssh_runner.go:195] Run: crio config
	I1028 18:28:36.613400   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:36.613425   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:36.613435   66801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:36.613454   66801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.78 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-051152 NodeName:no-preload-051152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:28:36.613596   66801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-051152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
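The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied over /var/tmp/minikube/kubeadm.yaml. If you wanted to sanity-check such a file by hand, kubeadm ships a validator; a hypothetical invocation with the minikube-provisioned binary (not something this test run executes):

	# validate the staged kubeadm config against the v1beta4 schema
	sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml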
	I1028 18:28:36.613669   66801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:28:36.624493   66801 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:36.624553   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:36.633828   66801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 18:28:36.649661   66801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:36.665454   66801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1028 18:28:36.681280   66801 ssh_runner.go:195] Run: grep 192.168.61.78	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:36.685010   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:36.697177   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:36.823266   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
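After the daemon-reload and kubelet start above, the usual way to confirm the service actually came up would be a check like the following (hypothetical, not shown in this log):

	# service state and the most recent kubelet log lines
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager -n 50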
	I1028 18:28:36.840346   66801 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152 for IP: 192.168.61.78
	I1028 18:28:36.840366   66801 certs.go:194] generating shared ca certs ...
	I1028 18:28:36.840380   66801 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:36.840538   66801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:36.840578   66801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:36.840586   66801 certs.go:256] generating profile certs ...
	I1028 18:28:36.840661   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.key
	I1028 18:28:36.840722   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key.262d982c
	I1028 18:28:36.840758   66801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key
	I1028 18:28:36.840859   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:36.840892   66801 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:36.840902   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:36.840922   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:36.840943   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:36.840971   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:36.841025   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:36.841818   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:36.881548   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:36.907084   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:36.947810   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:36.976268   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 18:28:37.003795   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:28:37.036252   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:37.059731   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:28:37.083467   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:37.106397   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:37.128719   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:37.151133   66801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:37.166917   66801 ssh_runner.go:195] Run: openssl version
	I1028 18:28:37.172387   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:37.182117   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186329   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186389   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.191925   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:37.201799   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:37.211620   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215889   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215923   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.221588   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:37.231983   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:37.242291   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246869   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246904   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.252408   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:37.262946   66801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:37.267334   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:37.273164   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:37.278831   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:37.284778   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:37.290547   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:37.296195   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
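The openssl invocations above use -checkend 86400, i.e. they succeed only if the certificate is still valid 24 hours from now; that is how the tooling decides the existing control-plane certs can be reused. For example:

	# exit status 0 => certificate does not expire within the next 86400s (24h)
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "cert good for >24h" || echo "cert expires within 24h"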
	I1028 18:28:37.301915   66801 kubeadm.go:392] StartCluster: {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:37.301986   66801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:37.302037   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.345115   66801 cri.go:89] found id: ""
	I1028 18:28:37.345185   66801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:37.355312   66801 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:37.355328   66801 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:37.355370   66801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:37.364777   66801 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:37.366056   66801 kubeconfig.go:125] found "no-preload-051152" server: "https://192.168.61.78:8443"
	I1028 18:28:37.368829   66801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:37.378010   66801 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.78
	I1028 18:28:37.378039   66801 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:37.378047   66801 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:37.378083   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.413442   66801 cri.go:89] found id: ""
	I1028 18:28:37.413522   66801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:37.428998   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:37.438365   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:37.438391   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:37.438442   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:37.447260   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:37.447310   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:37.456615   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:37.465292   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:37.465351   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:37.474382   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.482957   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:37.483012   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.491991   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:37.500635   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:37.500709   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:28:37.509632   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:37.518808   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:37.642796   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
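Because the stale-config check found no /etc/kubernetes/*.conf files, the two kubeadm init phases above regenerate the certificates and kubeconfigs. The same ls probe the tooling used earlier would now confirm they exist (a hypothetical follow-up check):

	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf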
	I1028 18:28:40.421350   67489 start.go:364] duration metric: took 3m5.178550845s to acquireMachinesLock for "default-k8s-diff-port-692033"
	I1028 18:28:40.421416   67489 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:40.421430   67489 fix.go:54] fixHost starting: 
	I1028 18:28:40.421843   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:40.421894   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:40.439583   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I1028 18:28:40.440133   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:40.440679   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:28:40.440701   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:40.441025   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:40.441198   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:40.441359   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:28:40.443029   67489 fix.go:112] recreateIfNeeded on default-k8s-diff-port-692033: state=Stopped err=<nil>
	I1028 18:28:40.443055   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	W1028 18:28:40.443202   67489 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:40.445489   67489 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-692033" ...
	I1028 18:28:39.079052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079556   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079584   67149 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:28:39.079593   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:28:39.079888   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.079919   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | skip adding static IP to network mk-old-k8s-version-223868 - found existing host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"}
	I1028 18:28:39.079935   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:28:39.079955   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:28:39.079971   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:28:39.082041   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.082354   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082480   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:28:39.082500   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:28:39.082528   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:39.082555   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:28:39.082567   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:28:39.204523   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
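The WaitForSSH step above shells into the VM with the driver's generated key; an equivalent manual session, assuming the DHCP lease still maps the VM to 192.168.83.194, would look like:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa \
	  docker@192.168.83.194 'exit 0'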
	I1028 18:28:39.204883   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:28:39.205526   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.208073   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208434   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.208478   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208709   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:28:39.208907   67149 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:39.208926   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:39.209144   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.211109   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211407   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.211437   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.211739   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.211888   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.212033   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.212218   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.212388   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.212398   67149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:39.316528   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:39.316566   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.316813   67149 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:28:39.316841   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.317028   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.319389   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319687   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.319713   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319836   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.320017   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320167   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320310   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.320458   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.320642   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.320656   67149 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:28:39.439149   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:28:39.439179   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.441957   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442268   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.442300   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442528   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.442736   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.442940   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.443122   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.443304   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.443525   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.443550   67149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:39.561619   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:39.561651   67149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:39.561702   67149 buildroot.go:174] setting up certificates
	I1028 18:28:39.561716   67149 provision.go:84] configureAuth start
	I1028 18:28:39.561731   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.562015   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.564838   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565195   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.565229   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565373   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.567875   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568262   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.568287   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568452   67149 provision.go:143] copyHostCerts
	I1028 18:28:39.568534   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:39.568553   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:39.568621   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:39.568745   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:39.568768   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:39.568810   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:39.568899   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:39.568911   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:39.568937   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:39.569006   67149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
	I1028 18:28:39.786398   67149 provision.go:177] copyRemoteCerts
	I1028 18:28:39.786449   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:39.786482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.789048   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789373   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.789417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789535   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.789733   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.789884   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.790013   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:39.871816   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:39.902889   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:28:39.932633   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:39.958581   67149 provision.go:87] duration metric: took 396.851161ms to configureAuth
	I1028 18:28:39.958609   67149 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:39.958796   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:28:39.958881   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.961667   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962019   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.962044   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962240   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.962468   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962671   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962850   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.963037   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.963220   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.963239   67149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:40.179808   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:40.179843   67149 machine.go:96] duration metric: took 970.91659ms to provisionDockerMachine
	I1028 18:28:40.179857   67149 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:28:40.179869   67149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:40.179917   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.180287   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:40.180319   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.183011   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183383   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.183411   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183578   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.183770   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.183964   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.184114   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.270445   67149 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:40.275798   67149 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:40.275825   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:40.275898   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:40.275995   67149 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:40.276108   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:40.287529   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:40.310860   67149 start.go:296] duration metric: took 130.989944ms for postStartSetup
	I1028 18:28:40.310899   67149 fix.go:56] duration metric: took 20.417730265s for fixHost
	I1028 18:28:40.310925   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.313613   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.313967   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.314000   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.314175   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.314354   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314518   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314692   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.314862   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:40.315021   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:40.315032   67149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:40.421204   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140120.384024791
	
	I1028 18:28:40.421225   67149 fix.go:216] guest clock: 1730140120.384024791
	I1028 18:28:40.421235   67149 fix.go:229] Guest: 2024-10-28 18:28:40.384024791 +0000 UTC Remote: 2024-10-28 18:28:40.310903937 +0000 UTC m=+244.300202669 (delta=73.120854ms)
	I1028 18:28:40.421259   67149 fix.go:200] guest clock delta is within tolerance: 73.120854ms
	I1028 18:28:40.421265   67149 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 20.528130845s
	I1028 18:28:40.421297   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.421574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:40.424700   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425088   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.425116   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425286   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.425971   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426188   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426266   67149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:40.426340   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.426604   67149 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:40.426632   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.429408   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429569   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429807   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.429841   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429950   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430059   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.430092   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.430123   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430236   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430383   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430459   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430616   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.430614   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.509203   67149 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:40.540019   67149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:40.701732   67149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:40.710264   67149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:40.710354   67149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:40.731373   67149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:40.731398   67149 start.go:495] detecting cgroup driver to use...
	I1028 18:28:40.731465   67149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:40.751312   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:40.766288   67149 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:40.766399   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:40.783995   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:40.800295   67149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:40.940688   67149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:41.101493   67149 docker.go:233] disabling docker service ...
	I1028 18:28:41.101562   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:41.123350   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:41.141744   67149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:41.279020   67149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:41.414748   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:41.429469   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:41.448611   67149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:28:41.448669   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.460766   67149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:41.460842   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.473021   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.485888   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.497498   67149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:41.509250   67149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:41.519701   67149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:41.519754   67149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:41.534596   67149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:41.544814   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:41.681203   67149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:41.786879   67149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:41.786957   67149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:41.791981   67149 start.go:563] Will wait 60s for crictl version
	I1028 18:28:41.792041   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:41.796034   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:41.839867   67149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:41.839958   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.873029   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.904534   67149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 18:28:38.508232   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.720400   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.784720   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.892007   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:38.892083   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.392953   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.892228   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.912702   66801 api_server.go:72] duration metric: took 1.020696043s to wait for apiserver process to appear ...
	I1028 18:28:39.912728   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:28:39.912749   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:39.913221   66801 api_server.go:269] stopped: https://192.168.61.78:8443/healthz: Get "https://192.168.61.78:8443/healthz": dial tcp 192.168.61.78:8443: connect: connection refused
	I1028 18:28:40.413025   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:40.446984   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Start
	I1028 18:28:40.447191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring networks are active...
	I1028 18:28:40.447998   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network default is active
	I1028 18:28:40.448350   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network mk-default-k8s-diff-port-692033 is active
	I1028 18:28:40.448884   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Getting domain xml...
	I1028 18:28:40.449664   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Creating domain...
	I1028 18:28:41.740010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting to get IP...
	I1028 18:28:41.740827   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741273   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:41.741192   68341 retry.go:31] will retry after 276.06097ms: waiting for machine to come up
	I1028 18:28:42.018700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019135   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019159   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.019089   68341 retry.go:31] will retry after 318.252876ms: waiting for machine to come up
	I1028 18:28:42.338630   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339287   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339312   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.339205   68341 retry.go:31] will retry after 428.196122ms: waiting for machine to come up
	I1028 18:28:42.768656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769225   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769248   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.769134   68341 retry.go:31] will retry after 483.256928ms: waiting for machine to come up
	I1028 18:28:43.253739   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254304   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254353   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.254220   68341 retry.go:31] will retry after 577.932805ms: waiting for machine to come up
	I1028 18:28:43.834355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.834976   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.835021   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.834945   68341 retry.go:31] will retry after 639.531065ms: waiting for machine to come up
	I1028 18:28:44.475727   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476299   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476331   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:44.476248   68341 retry.go:31] will retry after 1.171398436s: waiting for machine to come up
	I1028 18:28:43.473059   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.473096   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.473113   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.588338   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.588371   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.913612   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.918557   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:43.918598   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.412902   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.425930   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.425971   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.913482   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.926092   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.926126   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:45.413673   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:45.419384   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:28:45.430384   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:28:45.430431   66801 api_server.go:131] duration metric: took 5.517694037s to wait for apiserver health ...
	I1028 18:28:45.430442   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:45.430450   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:45.432587   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:28:41.906005   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:41.909278   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909683   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:41.909741   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909996   67149 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:41.915405   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:41.931747   67149 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:41.931886   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:28:41.931944   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:41.987909   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:41.987966   67149 ssh_runner.go:195] Run: which lz4
	I1028 18:28:41.993527   67149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:28:41.998982   67149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:28:41.999014   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:28:43.643480   67149 crio.go:462] duration metric: took 1.649982959s to copy over tarball
	I1028 18:28:43.643559   67149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:28:45.433946   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:28:45.453114   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:28:45.479255   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:28:45.497020   66801 system_pods.go:59] 8 kube-system pods found
	I1028 18:28:45.497072   66801 system_pods.go:61] "coredns-7c65d6cfc9-74b6t" [b6a550da-7c40-4283-b49e-1ab29e652037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:28:45.497084   66801 system_pods.go:61] "etcd-no-preload-051152" [d5b31ded-95ce-4dde-ba88-e653dfdb8d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:28:45.497097   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [95d0acb0-4d58-4307-9f4f-10f920ff4745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:28:45.497105   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [722530e1-1d76-40dc-8a24-fe79d0167835] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:28:45.497112   66801 system_pods.go:61] "kube-proxy-kg42f" [7891354b-a501-45c4-b15c-cf6d29e3721f] Running
	I1028 18:28:45.497121   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [c658808c-79c2-4b8e-b72c-0b2d8e058ab4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:28:45.497130   66801 system_pods.go:61] "metrics-server-6867b74b74-vgd8k" [626b71a2-6904-409f-9274-6963a94e6ac2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:28:45.497137   66801 system_pods.go:61] "storage-provisioner" [39bf84c9-9c6f-4048-8a11-460fb12f622b] Running
	I1028 18:28:45.497146   66801 system_pods.go:74] duration metric: took 17.863894ms to wait for pod list to return data ...
	I1028 18:28:45.497160   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:28:45.501945   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:28:45.501977   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:28:45.501993   66801 node_conditions.go:105] duration metric: took 4.827279ms to run NodePressure ...
	I1028 18:28:45.502014   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:45.835429   66801 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840823   66801 kubeadm.go:739] kubelet initialised
	I1028 18:28:45.840852   66801 kubeadm.go:740] duration metric: took 5.391212ms waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840862   66801 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:28:45.846565   66801 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:45.648994   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649559   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649587   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:45.649512   68341 retry.go:31] will retry after 1.258585317s: waiting for machine to come up
	I1028 18:28:46.909541   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909955   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:46.909911   68341 retry.go:31] will retry after 1.827150306s: waiting for machine to come up
	I1028 18:28:48.738193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738696   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738725   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:48.738653   68341 retry.go:31] will retry after 1.738249889s: waiting for machine to come up
	I1028 18:28:46.758767   67149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115173801s)
	I1028 18:28:46.758810   67149 crio.go:469] duration metric: took 3.115300284s to extract the tarball
	I1028 18:28:46.758821   67149 ssh_runner.go:146] rm: /preloaded.tar.lz4
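The preload step above unpacks the container-image tarball directly into /var and then removes the archive. A minimal restatement of the two commands from the log, with the flags annotated (an explanatory sketch, not new tooling):

    # -I lz4 decompresses through lz4; --xattrs-include security.capability preserves
    # file-capability xattrs on the extracted files; -C /var extracts in place
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # equivalent of the driver's rm step once extraction has finished
    sudo rm -f /preloaded.tar.lz4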
	I1028 18:28:46.816906   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:46.864347   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:46.864376   67149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:46.864499   67149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.864564   67149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.864623   67149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.864639   67149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.864674   67149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.864686   67149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:28:46.864710   67149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.864529   67149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:46.866383   67149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.866445   67149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.866493   67149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.866579   67149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.866795   67149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.867073   67149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.867095   67149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:28:46.867488   67149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.043358   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.053844   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.055684   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.056812   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.066211   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.090931   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.104900   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:28:47.141214   67149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:28:47.141260   67149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.141307   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202804   67149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:28:47.202863   67149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.202873   67149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:28:47.202903   67149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.202915   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202944   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.234811   67149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:28:47.234853   67149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.234900   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.236717   67149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:28:47.236751   67149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.236798   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.243872   67149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:28:47.243918   67149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.243971   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260210   67149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:28:47.260253   67149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:28:47.260256   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.260293   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260398   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.260438   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.260456   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.260517   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.260559   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413617   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.413776   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.413804   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413825   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.414063   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.414103   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.414150   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.544933   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.581577   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.582079   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.582161   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.582206   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.582344   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.582819   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.662237   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:28:47.736212   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.739757   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:28:47.739928   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:28:47.739802   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:28:47.739812   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:28:47.739841   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:28:47.783578   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:28:49.121698   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:49.266583   67149 cache_images.go:92] duration metric: took 2.402188013s to LoadCachedImages
	W1028 18:28:49.266686   67149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
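The image preflight above checks whether the v1.20.0 images already exist in the guest's CRI store, removes any tags that do not sit at the expected hashes, and then falls back to the host-side image cache, which is missing in this run (hence the warning). A condensed sketch using only commands that appear in the log:

    # list what the CRI runtime already has on the guest
    sudo crictl images --output json
    # check whether a required image is present under the expected ID
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0
    # a tag that is present at the wrong hash is removed so it can be reloaded
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
    # fallback source: the per-image tarballs under the local cache directory (absent here)
    ls /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0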
	I1028 18:28:49.266702   67149 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:28:49.266828   67149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:49.266918   67149 ssh_runner.go:195] Run: crio config
	I1028 18:28:49.318146   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:28:49.318167   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:49.318176   67149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:49.318193   67149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:28:49.318310   67149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:49.318371   67149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:28:49.329249   67149 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:49.329339   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:49.339379   67149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:28:49.359216   67149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:49.378289   67149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 18:28:49.397766   67149 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:49.401788   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:49.418204   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:49.558031   67149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:49.575443   67149 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:28:49.575469   67149 certs.go:194] generating shared ca certs ...
	I1028 18:28:49.575489   67149 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:49.575693   67149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:49.575746   67149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:49.575756   67149 certs.go:256] generating profile certs ...
	I1028 18:28:49.575859   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:28:49.575914   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:28:49.575951   67149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:28:49.576058   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:49.576092   67149 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:49.576103   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:49.576131   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:49.576162   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:49.576186   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:49.576238   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:49.576994   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:49.622814   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:49.653690   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:49.678975   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:49.707340   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:28:49.744836   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:28:49.776367   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:49.818999   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:28:49.847531   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:49.871924   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:49.897751   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:49.923267   67149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:49.939805   67149 ssh_runner.go:195] Run: openssl version
	I1028 18:28:49.945611   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:49.956191   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960862   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960916   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.966701   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:49.977882   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:49.990873   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995751   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995810   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:50.001891   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:50.013508   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:50.028132   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034144   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034217   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.041768   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:50.054079   67149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:50.058983   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:50.064802   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:50.070790   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:50.077090   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:50.083149   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:50.089232   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
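The -checkend 86400 probes are a quick expiry test: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which lets the restart path decide whether any control-plane certificates need regenerating before they are reused. For example:

    # exit status 0 means the cert is good for at least another 24h; non-zero means it is expiring or expired
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "cert valid for 24h" || echo "cert expires within 24h"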
	I1028 18:28:50.095205   67149 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:50.095338   67149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:50.095411   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.139777   67149 cri.go:89] found id: ""
	I1028 18:28:50.139854   67149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:50.151967   67149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:50.151986   67149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:50.152040   67149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:50.163454   67149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:50.164876   67149 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:28:50.165798   67149 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-223868" cluster setting kubeconfig missing "old-k8s-version-223868" context setting]
	I1028 18:28:50.167121   67149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:50.169545   67149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:50.179447   67149 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.194
	I1028 18:28:50.179477   67149 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:50.179490   67149 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:50.179542   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.213891   67149 cri.go:89] found id: ""
	I1028 18:28:50.213963   67149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:50.231491   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:50.241752   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:50.241775   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:50.241829   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:50.252015   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:50.252075   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:50.263032   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:50.273500   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:50.273564   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:50.283603   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.293521   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:50.293567   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.303701   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:50.316202   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:50.316269   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
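The block above is the stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise (including the files that are simply missing in this run) it is removed before regeneration. A rough shell equivalent of what the driver does here:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done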
	I1028 18:28:50.327841   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:50.341366   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:50.469586   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:49.414188   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:51.855115   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:50.478658   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479208   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479237   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:50.479151   68341 retry.go:31] will retry after 2.362711935s: waiting for machine to come up
	I1028 18:28:52.842907   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843290   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843314   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:52.843250   68341 retry.go:31] will retry after 2.561710525s: waiting for machine to come up
	I1028 18:28:51.507608   67149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037983659s)
	I1028 18:28:51.507645   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.733141   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.842228   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.947336   67149 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:51.947430   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.447618   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.947814   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.448476   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.947571   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.448371   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.947700   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.447735   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.948435   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
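The repeated pgrep runs above are the driver polling for the kube-apiserver process to appear after the control-plane phase. A rough shell equivalent (the 500ms interval is inferred from the log timestamps, not taken from the source):

    # -f matches the full command line, -x requires it to match the pattern exactly, -n picks the newest match
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done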
	I1028 18:28:53.857886   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:54.862972   66801 pod_ready.go:93] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:54.863005   66801 pod_ready.go:82] duration metric: took 9.016413449s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:54.863019   66801 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869043   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:55.869076   66801 pod_ready.go:82] duration metric: took 1.006049217s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869091   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874842   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.874865   66801 pod_ready.go:82] duration metric: took 2.005766936s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874875   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878913   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.878930   66801 pod_ready.go:82] duration metric: took 4.049698ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878937   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889897   66801 pod_ready.go:93] pod "kube-proxy-kg42f" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.889913   66801 pod_ready.go:82] duration metric: took 10.971269ms for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889921   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.407934   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408336   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408362   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:55.408274   68341 retry.go:31] will retry after 3.762790995s: waiting for machine to come up
	I1028 18:28:59.173489   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173900   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Found IP for machine: 192.168.39.215
	I1028 18:28:59.173923   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has current primary IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173929   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserving static IP address...
	I1028 18:28:59.174320   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.174343   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | skip adding static IP to network mk-default-k8s-diff-port-692033 - found existing host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"}
	I1028 18:28:59.174355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserved static IP address: 192.168.39.215
	I1028 18:28:59.174365   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for SSH to be available...
	I1028 18:28:59.174376   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Getting to WaitForSSH function...
	I1028 18:28:59.176441   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176755   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.176786   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176913   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH client type: external
	I1028 18:28:59.176936   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa (-rw-------)
	I1028 18:28:59.176958   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:59.176970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | About to run SSH command:
	I1028 18:28:59.176982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | exit 0
	I1028 18:28:59.300272   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:59.300649   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetConfigRaw
	I1028 18:28:59.301261   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.303505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.303832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.303857   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.304080   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:28:59.304287   67489 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:59.304310   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:59.304535   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.306713   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307008   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.307042   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307187   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.307348   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307627   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.307768   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.307936   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.307946   67489 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:59.412710   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:59.412743   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413009   67489 buildroot.go:166] provisioning hostname "default-k8s-diff-port-692033"
	I1028 18:28:59.413041   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.415772   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416048   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.416070   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416251   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.416437   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416728   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.416847   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.417030   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.417041   67489 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-692033 && echo "default-k8s-diff-port-692033" | sudo tee /etc/hostname
	I1028 18:28:59.538491   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-692033
	
	I1028 18:28:59.538518   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.540842   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541144   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.541173   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.541527   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541684   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541815   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.541964   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.542123   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.542138   67489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-692033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-692033/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-692033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:59.657448   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:59.657480   67489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:59.657524   67489 buildroot.go:174] setting up certificates
	I1028 18:28:59.657539   67489 provision.go:84] configureAuth start
	I1028 18:28:59.657556   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.657832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.660465   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660797   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.660840   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660949   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.663393   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663801   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.663830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663977   67489 provision.go:143] copyHostCerts
	I1028 18:28:59.664049   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:59.664062   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:59.664117   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:59.664217   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:59.664228   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:59.664250   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:59.664300   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:59.664308   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:59.664327   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:59.664403   67489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-692033 san=[127.0.0.1 192.168.39.215 default-k8s-diff-port-692033 localhost minikube]
	I1028 18:28:59.882619   67489 provision.go:177] copyRemoteCerts
	I1028 18:28:59.882672   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:59.882695   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.885303   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.885686   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885927   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.886121   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.886278   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.886382   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:28:59.975231   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:00.000412   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 18:29:00.024424   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:29:00.048646   67489 provision.go:87] duration metric: took 391.090444ms to configureAuth
	I1028 18:29:00.048674   67489 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:00.048884   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:00.048970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.051793   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052156   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.052185   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.052532   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052729   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052894   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.053080   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.053241   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.053254   67489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:00.525285   66600 start.go:364] duration metric: took 54.917560334s to acquireMachinesLock for "embed-certs-021370"
	I1028 18:29:00.525349   66600 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:29:00.525359   66600 fix.go:54] fixHost starting: 
	I1028 18:29:00.525740   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:29:00.525778   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:29:00.544614   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I1028 18:29:00.544976   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:29:00.545433   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:29:00.545455   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:29:00.545842   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:29:00.546046   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:00.546230   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:29:00.547770   66600 fix.go:112] recreateIfNeeded on embed-certs-021370: state=Stopped err=<nil>
	I1028 18:29:00.547794   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	W1028 18:29:00.547957   66600 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:29:00.549753   66600 out.go:177] * Restarting existing kvm2 VM for "embed-certs-021370" ...
	I1028 18:28:56.447531   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.947711   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.447782   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.947642   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.948256   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.447558   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.948018   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.448186   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.947565   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.280618   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:00.280641   67489 machine.go:96] duration metric: took 976.341252ms to provisionDockerMachine
	I1028 18:29:00.280653   67489 start.go:293] postStartSetup for "default-k8s-diff-port-692033" (driver="kvm2")
	I1028 18:29:00.280669   67489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:00.280690   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.281004   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:00.281044   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.283656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.283977   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.284010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.284170   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.284382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.284549   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.284692   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.372947   67489 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:00.377456   67489 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:00.377480   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:00.377547   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:00.377646   67489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:00.377762   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:00.388767   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:00.413520   67489 start.go:296] duration metric: took 132.852709ms for postStartSetup
	I1028 18:29:00.413557   67489 fix.go:56] duration metric: took 19.992127182s for fixHost
	I1028 18:29:00.413578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.416040   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416377   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.416405   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416553   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.416756   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.416930   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.417065   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.417228   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.417412   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.417424   67489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:00.525082   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140140.492840769
	
	I1028 18:29:00.525105   67489 fix.go:216] guest clock: 1730140140.492840769
	I1028 18:29:00.525114   67489 fix.go:229] Guest: 2024-10-28 18:29:00.492840769 +0000 UTC Remote: 2024-10-28 18:29:00.413561948 +0000 UTC m=+205.301669628 (delta=79.278821ms)
	I1028 18:29:00.525169   67489 fix.go:200] guest clock delta is within tolerance: 79.278821ms
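The fix.go lines above compare the guest's `date +%s.%N` output against the host timestamp captured alongside it and accept the drift if it is small. A minimal Go sketch of that check, using the two timestamps from the log; the 1-second tolerance is illustrative, not minikube's actual value:

```go
// A sketch of the guest-clock check above: parse the guest's `date +%s.%N`
// output and compare it with the host time captured alongside it.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func withinTolerance(guestOut string, hostRef time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := hostRef.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// "Remote" timestamp from the log entry above.
	hostRef := time.Date(2024, 10, 28, 18, 29, 0, 413561948, time.UTC)
	delta, ok := withinTolerance("1730140140.492840769", hostRef, time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
```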
	I1028 18:29:00.525180   67489 start.go:83] releasing machines lock for "default-k8s-diff-port-692033", held for 20.103791447s
	I1028 18:29:00.525214   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.525495   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:00.528023   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528385   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.528415   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529038   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529287   67489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:00.529323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.529380   67489 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:00.529403   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.531822   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532022   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532163   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532294   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532443   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532481   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532488   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532612   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532680   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.532830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532830   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.532965   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.533103   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.609362   67489 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:00.636444   67489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:00.785916   67489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:00.792198   67489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:00.792279   67489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:00.812095   67489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:00.812124   67489 start.go:495] detecting cgroup driver to use...
	I1028 18:29:00.812190   67489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:00.829536   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:00.844021   67489 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:00.844090   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:00.858561   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:00.873128   67489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:00.990494   67489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:01.148650   67489 docker.go:233] disabling docker service ...
	I1028 18:29:01.148729   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:01.162487   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:01.177407   67489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:01.303665   67489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:01.430019   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:01.443822   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:01.462768   67489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:01.462830   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.473669   67489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:01.473737   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.484364   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.496220   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.507216   67489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:01.518848   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.534216   67489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.554294   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.565095   67489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:01.574547   67489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:01.574614   67489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:01.596531   67489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:01.606858   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:01.740272   67489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:01.844969   67489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:01.845053   67489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:01.850004   67489 start.go:563] Will wait 60s for crictl version
	I1028 18:29:01.850056   67489 ssh_runner.go:195] Run: which crictl
	I1028 18:29:01.854032   67489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:01.893281   67489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:01.893367   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.923557   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.956282   67489 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
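The sed invocations above rewrite CRI-O's drop-in config (/etc/crio/crio.conf.d/02-crio.conf) to pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup. A rough Go sketch of the same rewrites as in-memory regex edits; the sample file contents are invented for illustration, and minikube itself edits the file on the node with sed as shown:

```go
// A sketch of the drop-in rewrites performed by the sed commands in the log,
// expressed as in-memory regex edits over an illustrative 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as in the first sed expression.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any existing conmon_cgroup line, then force cgroupfs and re-add
	// conmon_cgroup = "pod" right after cgroup_manager, as the later seds do.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```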
	I1028 18:29:00.551001   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Start
	I1028 18:29:00.551172   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring networks are active...
	I1028 18:29:00.551820   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network default is active
	I1028 18:29:00.552130   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network mk-embed-certs-021370 is active
	I1028 18:29:00.552482   66600 main.go:141] libmachine: (embed-certs-021370) Getting domain xml...
	I1028 18:29:00.553186   66600 main.go:141] libmachine: (embed-certs-021370) Creating domain...
	I1028 18:29:01.830016   66600 main.go:141] libmachine: (embed-certs-021370) Waiting to get IP...
	I1028 18:29:01.831046   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:01.831522   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:01.831630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:01.831518   68528 retry.go:31] will retry after 300.306268ms: waiting for machine to come up
	I1028 18:29:02.132901   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.133350   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.133383   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.133293   68528 retry.go:31] will retry after 383.232008ms: waiting for machine to come up
	I1028 18:29:02.518736   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.519274   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.519299   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.519241   68528 retry.go:31] will retry after 354.591942ms: waiting for machine to come up
	I1028 18:29:02.875813   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.876360   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.876397   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.876325   68528 retry.go:31] will retry after 529.444037ms: waiting for machine to come up
	I1028 18:28:58.895888   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:58.895918   66801 pod_ready.go:82] duration metric: took 1.005990705s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:58.895932   66801 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:00.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:02.903390   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:01.957748   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:01.960967   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:01.961382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961635   67489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:01.966300   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:01.979786   67489 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:01.979899   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:01.979957   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:02.020659   67489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:02.020716   67489 ssh_runner.go:195] Run: which lz4
	I1028 18:29:02.024772   67489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:02.030183   67489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:02.030206   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:03.449423   67489 crio.go:462] duration metric: took 1.424673911s to copy over tarball
	I1028 18:29:03.449498   67489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
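The preload step above checks for /preloaded.tar.lz4 on the node, copies the cached tarball over when the stat fails, and unpacks it into /var with lz4. A small Go sketch of that flow, with exec.Command standing in for the remote ssh_runner; paths and tar flags are the ones from the log:

```go
// A sketch of the preload check/copy/extract sequence shown above. The scp of
// the cached tarball is elided; exec.Command runs locally here rather than
// over SSH as minikube does.
package main

import (
	"log"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4`.
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
		log.Printf("%s missing, would copy the cached preload here", tarball)
	}
	// Extraction, mirroring the tar invocation in the log.
	out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
```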
	I1028 18:29:01.447557   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.947946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.448522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.947533   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.447522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.948025   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.448136   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.948157   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.447635   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.947987   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.407835   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:03.408366   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:03.408390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:03.408265   68528 retry.go:31] will retry after 680.005296ms: waiting for machine to come up
	I1028 18:29:04.089802   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.090390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.090409   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.090338   68528 retry.go:31] will retry after 833.681725ms: waiting for machine to come up
	I1028 18:29:04.925788   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.926278   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.926298   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.926227   68528 retry.go:31] will retry after 1.050194845s: waiting for machine to come up
	I1028 18:29:05.978270   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:05.978715   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:05.978742   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:05.978669   68528 retry.go:31] will retry after 1.416773018s: waiting for machine to come up
	I1028 18:29:07.397367   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:07.397843   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:07.397876   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:07.397787   68528 retry.go:31] will retry after 1.621623459s: waiting for machine to come up
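The retry.go lines above show the wait-for-IP loop: query the libvirt DHCP lease for the domain, and if no address is assigned yet, sleep for a randomized, growing interval and try again. A minimal sketch of that backoff pattern, with lookupIP as a hypothetical stand-in for the lease query:

```go
// A sketch of the retry-with-backoff loop used while waiting for the VM to
// report an IP address. lookupIP is a placeholder for the DHCP-lease query.
package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: in the log this asks libvirt for the domain's DHCP lease.
	return "", errors.New("no lease yet")
}

func main() {
	backoff := 300 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(); err == nil {
			log.Printf("got IP %s", ip)
			return
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		log.Printf("will retry after %v: waiting for machine to come up", wait)
		time.Sleep(wait)
		backoff += backoff / 2
	}
	log.Fatal("machine never reported an IP address")
}
```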
	I1028 18:29:04.903465   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:06.903931   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:05.622217   67489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172685001s)
	I1028 18:29:05.622253   67489 crio.go:469] duration metric: took 2.172801769s to extract the tarball
	I1028 18:29:05.622264   67489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:05.660585   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:05.705484   67489 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:05.705510   67489 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:05.705520   67489 kubeadm.go:934] updating node { 192.168.39.215 8444 v1.31.2 crio true true} ...
	I1028 18:29:05.705634   67489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-692033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:05.705725   67489 ssh_runner.go:195] Run: crio config
	I1028 18:29:05.760618   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:05.760649   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:05.760661   67489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:05.760690   67489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-692033 NodeName:default-k8s-diff-port-692033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:05.760858   67489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-692033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:05.760936   67489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:05.771392   67489 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:05.771464   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:05.780926   67489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1028 18:29:05.797951   67489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:05.814159   67489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1028 18:29:05.830723   67489 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:05.835163   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
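The /etc/hosts update above uses a filter-and-append pattern: strip any existing line ending in the name, append the fresh mapping, write to a temp file, and copy it back with sudo. A sketch of the same edit done on the file contents in Go; the host entry is the one from the log, and the temp-file/sudo-cp write-back is elided:

```go
// Rewrite a hosts-file snapshot: drop stale entries for the name, append the
// current mapping. Mirrors the grep -v / echo pipeline shown in the log.
package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.214\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.39.215", "control-plane.minikube.internal"))
}
```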
	I1028 18:29:05.847192   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:05.972201   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:05.990475   67489 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033 for IP: 192.168.39.215
	I1028 18:29:05.990492   67489 certs.go:194] generating shared ca certs ...
	I1028 18:29:05.990511   67489 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:05.990711   67489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:05.990764   67489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:05.990776   67489 certs.go:256] generating profile certs ...
	I1028 18:29:05.990875   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.key
	I1028 18:29:05.990991   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key.81b9981a
	I1028 18:29:05.991052   67489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key
	I1028 18:29:05.991218   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:05.991268   67489 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:05.991283   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:05.991317   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:05.991359   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:05.991405   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:05.991481   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:05.992294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:06.033938   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:06.070407   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:06.115934   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:06.144600   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 18:29:06.169202   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:06.196294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:06.219384   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:29:06.242169   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:06.266506   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:06.290175   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:06.313006   67489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:06.329076   67489 ssh_runner.go:195] Run: openssl version
	I1028 18:29:06.335322   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:06.346021   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350401   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350464   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.356134   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:06.366765   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:06.377486   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381920   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381978   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.387492   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:06.398392   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:06.413238   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418376   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418429   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.423997   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
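The openssl/ln pairs above implement the usual CA hash-link layout: compute each certificate's subject hash and symlink <hash>.0 in /etc/ssl/certs to the PEM file. A short Go sketch of one such step; paths mirror the log (where the PEM is first linked into /etc/ssl/certs), and error handling is minimal:

```go
// A sketch of the hash-link step performed by the openssl/ln commands above.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/etc/ssl/certs/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
}
```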
	I1028 18:29:06.436170   67489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:06.440853   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:06.446851   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:06.452980   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:06.458973   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:06.465088   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:06.470776   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:29:06.476462   67489 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:06.476588   67489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:06.476638   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.519820   67489 cri.go:89] found id: ""
	I1028 18:29:06.519884   67489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:06.530091   67489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:06.530110   67489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:06.530171   67489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:06.539807   67489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:06.540946   67489 kubeconfig.go:125] found "default-k8s-diff-port-692033" server: "https://192.168.39.215:8444"
	I1028 18:29:06.543088   67489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:06.552354   67489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I1028 18:29:06.552379   67489 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:06.552389   67489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:06.552445   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.586545   67489 cri.go:89] found id: ""
	I1028 18:29:06.586611   67489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:06.603418   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:06.612856   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:06.612876   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:06.612921   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:29:06.621852   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:06.621900   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:06.631132   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:29:06.640088   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:06.640158   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:06.651007   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.660034   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:06.660104   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.669587   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:29:06.678863   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:06.678937   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
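The cleanup pass above treats each kubeconfig under /etc/kubernetes as stale when it does not mention the expected control-plane endpoint and deletes it. A minimal Go sketch of that check follows; the function name and error handling are illustrative only, not minikube's actual code.

```go
// Hypothetical sketch of the stale-kubeconfig cleanup seen above: keep a
// config file only if it already points at the expected endpoint. Missing
// files are handled the same way as stale ones, mirroring the
// "grep ...: No such file or directory" branches in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // config already targets the right endpoint; keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup failed:", err)
		}
	}
}
```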
	I1028 18:29:06.688820   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:06.698470   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:06.820432   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.030810   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210339958s)
	I1028 18:29:08.030839   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.255000   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.321500   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.412775   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:08.412854   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.913648   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.413011   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.459009   67489 api_server.go:72] duration metric: took 1.046232596s to wait for apiserver process to appear ...
	I1028 18:29:09.459041   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:09.459062   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:09.459626   67489 api_server.go:269] stopped: https://192.168.39.215:8444/healthz: Get "https://192.168.39.215:8444/healthz": dial tcp 192.168.39.215:8444: connect: connection refused
	I1028 18:29:09.960128   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:06.447581   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.947550   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.447977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.947491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.447960   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.947662   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.448201   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.947753   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.448116   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.948175   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.020419   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:09.020867   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:09.020899   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:09.020814   68528 retry.go:31] will retry after 2.2230034s: waiting for machine to come up
	I1028 18:29:11.245136   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:11.245630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:11.245657   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:11.245595   68528 retry.go:31] will retry after 2.153898764s: waiting for machine to come up
	I1028 18:29:09.403596   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:11.903702   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:12.135346   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.135381   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.135394   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.166207   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.166234   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.459631   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.473153   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.473183   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:12.959778   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.969281   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.969320   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:13.459913   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:13.464362   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:29:13.471925   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:13.471953   67489 api_server.go:131] duration metric: took 4.012904227s to wait for apiserver health ...
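The half-second retry loop above keeps probing /healthz until the apiserver answers 200. Below is a minimal sketch of such a probe; it skips TLS verification for brevity, whereas the real client authenticates with the cluster's certificates, and the URL and timeout simply echo the values in the log.

```go
// Minimal sketch of polling an apiserver /healthz endpoint until it reports
// "ok". Non-200 responses (403 before RBAC bootstraps, 500 while post-start
// hooks run) are logged and retried, as in the output above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered 200 "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the log's half-second retry cadence
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.215:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```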
	I1028 18:29:13.471964   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:13.471971   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:13.473908   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:13.475283   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:13.487393   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
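The bridge CNI step copies a conflist into /etc/cni/net.d. The 496-byte file itself is not printed in the log, so the snippet below only illustrates what a minimal bridge-plus-portmap conflist of that kind could look like; the actual contents minikube writes may differ.

```go
// Hypothetical illustration of the bridge CNI step: write a representative
// bridge + portmap conflist to /etc/cni/net.d. Subnet and plugin options are
// placeholders, not taken from the log.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```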
	I1028 18:29:13.532627   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:13.544945   67489 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:13.544982   67489 system_pods.go:61] "coredns-7c65d6cfc9-ctx9z" [7067f349-3a22-468d-bd9d-19d057eb43f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:13.544993   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [313161ff-f30f-4e25-978d-9aa2eba7fc44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:13.545004   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [e9a66e8e-946b-4365-bd63-3adfdd75e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:13.545014   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [0e682f68-2f9a-4bf3-bbe4-3a6b1ef6778d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:13.545021   67489 system_pods.go:61] "kube-proxy-86rll" [d34f46c6-3227-40c9-ac97-066b98bfce32] Running
	I1028 18:29:13.545029   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [b9058969-31e2-4249-862f-ef5de7784adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:13.545043   67489 system_pods.go:61] "metrics-server-6867b74b74-dz4nl" [833c650e-5f5d-46a1-9ae1-64619c53a92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:13.545047   67489 system_pods.go:61] "storage-provisioner" [342db8fa-7873-47b0-a5a6-52cde2e19d47] Running
	I1028 18:29:13.545053   67489 system_pods.go:74] duration metric: took 12.403166ms to wait for pod list to return data ...
	I1028 18:29:13.545060   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:13.548591   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:13.548619   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:13.548632   67489 node_conditions.go:105] duration metric: took 3.567222ms to run NodePressure ...
	I1028 18:29:13.548649   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:13.818718   67489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826139   67489 kubeadm.go:739] kubelet initialised
	I1028 18:29:13.826161   67489 kubeadm.go:740] duration metric: took 7.415257ms waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826170   67489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:13.833418   67489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.838793   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838820   67489 pod_ready.go:82] duration metric: took 5.377698ms for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.838831   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838840   67489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.843172   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843195   67489 pod_ready.go:82] duration metric: took 4.34633ms for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.843203   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843209   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.847581   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847615   67489 pod_ready.go:82] duration metric: took 4.389898ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.847630   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847642   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:11.448521   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.947592   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.448427   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.948413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.448390   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.948518   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.447929   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.948106   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.948236   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.401547   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:13.402054   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:13.402083   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:13.402028   68528 retry.go:31] will retry after 2.345507901s: waiting for machine to come up
	I1028 18:29:15.749122   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:15.749485   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:15.749502   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:15.749451   68528 retry.go:31] will retry after 2.974576274s: waiting for machine to come up
	I1028 18:29:13.903930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.403934   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:15.858338   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:18.354245   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.447535   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.948117   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.448197   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.948491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.948393   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.448406   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.947788   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.448100   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.947907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.727508   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.727990   66600 main.go:141] libmachine: (embed-certs-021370) Found IP for machine: 192.168.50.62
	I1028 18:29:18.728011   66600 main.go:141] libmachine: (embed-certs-021370) Reserving static IP address...
	I1028 18:29:18.728028   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has current primary IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.728447   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.728478   66600 main.go:141] libmachine: (embed-certs-021370) Reserved static IP address: 192.168.50.62
	I1028 18:29:18.728497   66600 main.go:141] libmachine: (embed-certs-021370) DBG | skip adding static IP to network mk-embed-certs-021370 - found existing host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"}
	I1028 18:29:18.728510   66600 main.go:141] libmachine: (embed-certs-021370) Waiting for SSH to be available...
	I1028 18:29:18.728520   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Getting to WaitForSSH function...
	I1028 18:29:18.730574   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731031   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.731069   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731227   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH client type: external
	I1028 18:29:18.731248   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa (-rw-------)
	I1028 18:29:18.731282   66600 main.go:141] libmachine: (embed-certs-021370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:29:18.731310   66600 main.go:141] libmachine: (embed-certs-021370) DBG | About to run SSH command:
	I1028 18:29:18.731327   66600 main.go:141] libmachine: (embed-certs-021370) DBG | exit 0
	I1028 18:29:18.860213   66600 main.go:141] libmachine: (embed-certs-021370) DBG | SSH cmd err, output: <nil>: 
	I1028 18:29:18.860619   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetConfigRaw
	I1028 18:29:18.861235   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:18.863576   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.863932   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.863956   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.864224   66600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/config.json ...
	I1028 18:29:18.864465   66600 machine.go:93] provisionDockerMachine start ...
	I1028 18:29:18.864521   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:18.864720   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.866951   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867314   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.867349   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867511   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.867665   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867811   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867941   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.868072   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.868230   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.868239   66600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:29:18.972695   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:29:18.972729   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.972970   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:29:18.973000   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.973209   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.975608   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.975889   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.975915   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.976082   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.976269   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976401   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976505   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.976625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.976796   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.976809   66600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-021370 && echo "embed-certs-021370" | sudo tee /etc/hostname
	I1028 18:29:19.094622   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-021370
	
	I1028 18:29:19.094655   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.097110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097436   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.097460   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097639   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.097817   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.097967   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.098121   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.098309   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.098517   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.098533   66600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-021370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-021370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-021370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:29:19.218088   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:29:19.218112   66600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:29:19.218140   66600 buildroot.go:174] setting up certificates
	I1028 18:29:19.218150   66600 provision.go:84] configureAuth start
	I1028 18:29:19.218159   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:19.218411   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:19.221093   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221441   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.221469   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221641   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.223628   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.223908   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.223928   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.224085   66600 provision.go:143] copyHostCerts
	I1028 18:29:19.224155   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:29:19.224185   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:29:19.224252   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:29:19.224380   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:29:19.224390   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:29:19.224422   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:29:19.224532   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:29:19.224542   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:29:19.224570   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:29:19.224655   66600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-021370 san=[127.0.0.1 192.168.50.62 embed-certs-021370 localhost minikube]
	I1028 18:29:19.402860   66600 provision.go:177] copyRemoteCerts
	I1028 18:29:19.402925   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:29:19.402954   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.405556   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.405904   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.405939   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.406100   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.406265   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.406391   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.406494   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.486543   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:19.510790   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:29:19.534037   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:29:19.557509   66600 provision.go:87] duration metric: took 339.349044ms to configureAuth
	I1028 18:29:19.557531   66600 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:19.557681   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:19.557745   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.560240   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560594   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.560623   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560757   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.560931   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561110   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561320   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.561490   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.561651   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.561664   66600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:19.781270   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:19.781304   66600 machine.go:96] duration metric: took 916.814114ms to provisionDockerMachine
	I1028 18:29:19.781317   66600 start.go:293] postStartSetup for "embed-certs-021370" (driver="kvm2")
	I1028 18:29:19.781327   66600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:19.781345   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:19.781664   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:19.781690   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.784176   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784509   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.784538   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784667   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.784854   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.785028   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.785171   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.867396   66600 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:19.871516   66600 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:19.871542   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:19.871630   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:19.871717   66600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:19.871799   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:19.882017   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:19.906531   66600 start.go:296] duration metric: took 125.203636ms for postStartSetup
	I1028 18:29:19.906562   66600 fix.go:56] duration metric: took 19.381205641s for fixHost
	I1028 18:29:19.906581   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.909285   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909610   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.909640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909778   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.909980   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910311   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910444   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.910625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.910788   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.910803   66600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:20.017311   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140159.989127147
	
	I1028 18:29:20.017339   66600 fix.go:216] guest clock: 1730140159.989127147
	I1028 18:29:20.017346   66600 fix.go:229] Guest: 2024-10-28 18:29:19.989127147 +0000 UTC Remote: 2024-10-28 18:29:19.906566181 +0000 UTC m=+356.890524496 (delta=82.560966ms)
	I1028 18:29:20.017368   66600 fix.go:200] guest clock delta is within tolerance: 82.560966ms
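The fix step above compares the guest clock (read via `date +%s.%N`) against the host timestamp and accepts the skew because it is within tolerance. A small sketch of that comparison, reusing the timestamps reported in the log; the 2-second tolerance is a placeholder, not minikube's exact threshold.

```go
// Sketch of a guest-clock tolerance check: compute the absolute delta between
// guest and host timestamps and flag it only when it exceeds a threshold.
package main

import (
	"fmt"
	"time"
)

func clockSkewOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1730140159, 989127147) // guest clock value from the log
	host := time.Date(2024, 10, 28, 18, 29, 19, 906566181, time.UTC)
	delta, ok := clockSkewOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~82.56ms, as logged
}
```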
	I1028 18:29:20.017374   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 19.492049852s
	I1028 18:29:20.017396   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.017657   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:20.020286   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020680   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.020704   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020816   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021307   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021491   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021577   66600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:20.021616   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.021746   66600 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:20.021767   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.024157   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024429   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024511   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024533   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024679   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.024856   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.024880   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024896   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.025019   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025070   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.025160   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.025201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.025304   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025443   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.101316   66600 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:20.124859   66600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:20.268773   66600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:20.275277   66600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:20.275358   66600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:20.291972   66600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:20.291999   66600 start.go:495] detecting cgroup driver to use...
	I1028 18:29:20.292066   66600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:20.311389   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:20.325385   66600 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:20.325434   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:20.339246   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:20.353759   66600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:20.477639   66600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:20.622752   66600 docker.go:233] disabling docker service ...
	I1028 18:29:20.622825   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:20.637258   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:20.650210   66600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:20.801036   66600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:20.945078   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:20.959494   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:20.977797   66600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:20.977854   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.987991   66600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:20.988038   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.998188   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.008502   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.018540   66600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:21.028663   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.038758   66600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.056298   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.067136   66600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:21.076859   66600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:21.076906   66600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:21.090468   66600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:21.099951   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:21.226675   66600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:21.321993   66600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:21.322074   66600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:21.327981   66600 start.go:563] Will wait 60s for crictl version
	I1028 18:29:21.328028   66600 ssh_runner.go:195] Run: which crictl
	I1028 18:29:21.331673   66600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:21.369066   66600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:21.369168   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.396873   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.426233   66600 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:21.427570   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:21.430207   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430560   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:21.430582   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430732   66600 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:21.435293   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:21.447885   66600 kubeadm.go:883] updating cluster {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:21.447989   66600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:21.448067   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:21.488401   66600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:21.488488   66600 ssh_runner.go:195] Run: which lz4
	I1028 18:29:21.492578   66600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:21.496531   66600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:21.496560   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:22.824198   66600 crio.go:462] duration metric: took 1.331643546s to copy over tarball
	I1028 18:29:22.824276   66600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:18.902233   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.902721   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.904121   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.354850   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.355961   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:24.854445   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:21.447903   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.948305   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.448529   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.947708   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.447881   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.947572   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.448433   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.948299   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.447748   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.947863   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.906928   66600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082617931s)
	I1028 18:29:24.906959   66600 crio.go:469] duration metric: took 2.082732511s to extract the tarball
	I1028 18:29:24.906968   66600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:24.944094   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:24.991024   66600 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:24.991048   66600 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:24.991057   66600 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.31.2 crio true true} ...
	I1028 18:29:24.991175   66600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-021370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:24.991262   66600 ssh_runner.go:195] Run: crio config
	I1028 18:29:25.034609   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:25.034629   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:25.034639   66600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:25.034657   66600 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-021370 NodeName:embed-certs-021370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:25.034803   66600 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-021370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:25.034858   66600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:25.044587   66600 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:25.044661   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:25.054150   66600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 18:29:25.070100   66600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:25.085866   66600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1028 18:29:25.101932   66600 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:25.105817   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:25.117399   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:25.235698   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:25.251517   66600 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370 for IP: 192.168.50.62
	I1028 18:29:25.251536   66600 certs.go:194] generating shared ca certs ...
	I1028 18:29:25.251549   66600 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:25.251701   66600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:25.251758   66600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:25.251771   66600 certs.go:256] generating profile certs ...
	I1028 18:29:25.251871   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/client.key
	I1028 18:29:25.251951   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key.1a2ee1e7
	I1028 18:29:25.252010   66600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key
	I1028 18:29:25.252184   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:25.252213   66600 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:25.252222   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:25.252246   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:25.252271   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:25.252291   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:25.252328   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:25.252968   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:25.280370   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:25.323757   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:25.356813   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:25.395729   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 18:29:25.428768   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:25.459929   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:25.485206   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:29:25.514312   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:25.537007   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:25.559926   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:25.582419   66600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:25.599284   66600 ssh_runner.go:195] Run: openssl version
	I1028 18:29:25.605132   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:25.615576   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619856   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619911   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.625516   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:25.636185   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:25.646664   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650958   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650998   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.657176   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:25.668490   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:25.679608   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.683993   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.684041   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.689729   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:25.700817   66600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:25.705214   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:25.711351   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:25.717172   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:25.722879   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:25.728415   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:25.733859   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:29:25.739422   66600 kubeadm.go:392] StartCluster: {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:25.739492   66600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:25.739534   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.779869   66600 cri.go:89] found id: ""
	I1028 18:29:25.779926   66600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:25.790753   66600 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:25.790771   66600 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:25.790811   66600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:25.800588   66600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:25.801624   66600 kubeconfig.go:125] found "embed-certs-021370" server: "https://192.168.50.62:8443"
	I1028 18:29:25.803466   66600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:25.813212   66600 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I1028 18:29:25.813240   66600 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:25.813254   66600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:25.813312   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.848911   66600 cri.go:89] found id: ""
	I1028 18:29:25.848976   66600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:25.866165   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:25.876454   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:25.876485   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:25.876539   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:29:25.886746   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:25.886802   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:25.897486   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:29:25.907828   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:25.907881   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:25.917520   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.926896   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:25.926950   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.937184   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:29:25.946539   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:25.946585   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:25.956520   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:25.968541   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:26.077716   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.298743   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.220990469s)
	I1028 18:29:27.298777   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.517286   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.582890   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.648091   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:27.648159   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.402969   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:27.405049   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.356621   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.356642   67489 pod_ready.go:82] duration metric: took 12.508989427s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.356653   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361609   67489 pod_ready.go:93] pod "kube-proxy-86rll" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.361627   67489 pod_ready.go:82] duration metric: took 4.968039ms for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361635   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365430   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.365449   67489 pod_ready.go:82] duration metric: took 3.807327ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365460   67489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:28.373442   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.448386   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.948082   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.447496   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.948285   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.947683   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.447813   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.947810   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.448413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.947477   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.148668   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.648320   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.148392   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.648218   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.682858   66600 api_server.go:72] duration metric: took 2.034774456s to wait for apiserver process to appear ...
	I1028 18:29:29.682888   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:29.682915   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:29.683457   66600 api_server.go:269] stopped: https://192.168.50.62:8443/healthz: Get "https://192.168.50.62:8443/healthz": dial tcp 192.168.50.62:8443: connect: connection refused
	I1028 18:29:30.182997   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.878280   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.878304   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:32.878318   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.942789   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.942828   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:29.903158   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:32.404024   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.183344   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.187337   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.187362   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:33.683288   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.687653   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.687680   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:34.183190   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:34.187671   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:29:34.195909   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:34.195938   66600 api_server.go:131] duration metric: took 4.51303648s to wait for apiserver health ...
	I1028 18:29:34.195950   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:34.195959   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:34.197469   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:30.872450   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.372710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:31.448099   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.948269   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.447660   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.947559   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.447716   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.948569   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.447555   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.947612   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.448411   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.947786   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.198803   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:34.221645   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:34.250694   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:34.261167   66600 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:34.261211   66600 system_pods.go:61] "coredns-7c65d6cfc9-bdtd8" [e1fff57c-ba57-4592-9049-7cc80a6f67a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:34.261229   66600 system_pods.go:61] "etcd-embed-certs-021370" [0c805e30-b6d8-416c-97af-c33b142b46e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:34.261240   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [244e08f7-7e8c-4547-b145-9816374fe582] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:34.261251   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [c08dc68e-d441-4d96-8377-957c381c4ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:34.261265   66600 system_pods.go:61] "kube-proxy-7g7lr" [828a4297-7703-46a7-bffe-c8daf83ef4bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:29:34.261277   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [2bc3fea6-0f01-43e9-b69e-deb26980e658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:34.261286   66600 system_pods.go:61] "metrics-server-6867b74b74-gg8bl" [599d8cf3-717d-46b2-a5ba-43e00f46829b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:34.261296   66600 system_pods.go:61] "storage-provisioner" [ad047e20-2de9-447c-83bc-8b835292a25f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:29:34.261307   66600 system_pods.go:74] duration metric: took 10.589505ms to wait for pod list to return data ...
	I1028 18:29:34.261319   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:34.265041   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:34.265066   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:34.265079   66600 node_conditions.go:105] duration metric: took 3.75485ms to run NodePressure ...
	I1028 18:29:34.265098   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:34.567509   66600 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571573   66600 kubeadm.go:739] kubelet initialised
	I1028 18:29:34.571592   66600 kubeadm.go:740] duration metric: took 4.056877ms waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571599   66600 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:34.576872   66600 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:36.586357   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:34.901383   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.902526   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:35.871154   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:37.873138   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.447566   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.947886   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.448276   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.948547   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.447546   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.947974   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.448334   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.948183   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.448396   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.947620   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.083269   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.083414   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:41.083443   66600 pod_ready.go:82] duration metric: took 6.506548177s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:41.083453   66600 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:39.401480   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.402426   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:40.370529   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:42.371580   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:44.372259   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.448306   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.947486   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.448219   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.948295   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.447765   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.947468   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.448454   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.947488   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.447568   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.948070   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.089927   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.589484   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.594775   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:43.403246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.403595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.902160   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.872441   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.371650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.448123   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.948178   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.447989   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.947888   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.448230   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.947692   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.448090   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.947996   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.447949   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.947977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.089584   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.089607   66600 pod_ready.go:82] duration metric: took 7.006147079s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.089619   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093940   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.093959   66600 pod_ready.go:82] duration metric: took 4.332474ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093969   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098279   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.098295   66600 pod_ready.go:82] duration metric: took 4.319206ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098304   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102326   66600 pod_ready.go:93] pod "kube-proxy-7g7lr" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.102341   66600 pod_ready.go:82] duration metric: took 4.03162ms for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102349   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106249   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.106265   66600 pod_ready.go:82] duration metric: took 3.910208ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106279   66600 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:50.112678   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:52.113794   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.902296   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.902424   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.371741   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:53.371833   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.448130   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:51.948450   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:51.948545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:51.987428   67149 cri.go:89] found id: ""
	I1028 18:29:51.987459   67149 logs.go:282] 0 containers: []
	W1028 18:29:51.987470   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:51.987478   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:51.987534   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:52.021429   67149 cri.go:89] found id: ""
	I1028 18:29:52.021452   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.021460   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:52.021466   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:52.021509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:52.055338   67149 cri.go:89] found id: ""
	I1028 18:29:52.055362   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.055373   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:52.055380   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:52.055432   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:52.088673   67149 cri.go:89] found id: ""
	I1028 18:29:52.088697   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.088705   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:52.088711   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:52.088766   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:52.129833   67149 cri.go:89] found id: ""
	I1028 18:29:52.129854   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.129862   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:52.129867   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:52.129918   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:52.162994   67149 cri.go:89] found id: ""
	I1028 18:29:52.163029   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.163040   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:52.163047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:52.163105   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:52.196819   67149 cri.go:89] found id: ""
	I1028 18:29:52.196840   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.196848   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:52.196853   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:52.196906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:52.232924   67149 cri.go:89] found id: ""
	I1028 18:29:52.232955   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.232965   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:52.232977   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:52.232992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:52.283317   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:52.283353   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:52.296648   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:52.296673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:52.423396   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:52.423418   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:52.423429   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:52.497671   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:52.497704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:55.037920   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:55.052539   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:55.052602   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:55.089302   67149 cri.go:89] found id: ""
	I1028 18:29:55.089332   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.089343   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:55.089351   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:55.089404   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:55.127317   67149 cri.go:89] found id: ""
	I1028 18:29:55.127345   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.127352   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:55.127358   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:55.127413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:55.161689   67149 cri.go:89] found id: ""
	I1028 18:29:55.161714   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.161721   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:55.161727   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:55.161772   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:55.196494   67149 cri.go:89] found id: ""
	I1028 18:29:55.196521   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.196534   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:55.196542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:55.196596   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:55.234980   67149 cri.go:89] found id: ""
	I1028 18:29:55.235008   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.235020   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:55.235028   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:55.235086   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:55.274750   67149 cri.go:89] found id: ""
	I1028 18:29:55.274775   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.274783   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:55.274789   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:55.274842   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:55.309839   67149 cri.go:89] found id: ""
	I1028 18:29:55.309865   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.309874   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:55.309881   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:55.309943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:55.358765   67149 cri.go:89] found id: ""
	I1028 18:29:55.358793   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.358805   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:55.358816   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:55.358830   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:55.422821   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:55.422869   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:55.439458   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:55.439482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:55.507743   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:55.507764   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:55.507775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:55.582679   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:55.582710   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:54.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.612967   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:54.402722   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.902816   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:55.372539   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:57.871444   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:58.124907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:58.139125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:58.139181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:58.178829   67149 cri.go:89] found id: ""
	I1028 18:29:58.178853   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.178864   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:58.178871   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:58.178933   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:58.212290   67149 cri.go:89] found id: ""
	I1028 18:29:58.212320   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.212336   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:58.212344   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:58.212402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:58.246108   67149 cri.go:89] found id: ""
	I1028 18:29:58.246135   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.246145   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:58.246152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:58.246212   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:58.280625   67149 cri.go:89] found id: ""
	I1028 18:29:58.280651   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.280662   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:58.280670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:58.280727   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:58.318755   67149 cri.go:89] found id: ""
	I1028 18:29:58.318783   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.318793   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:58.318801   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:58.318853   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:58.356452   67149 cri.go:89] found id: ""
	I1028 18:29:58.356487   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.356499   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:58.356506   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:58.356564   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:58.389906   67149 cri.go:89] found id: ""
	I1028 18:29:58.389928   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.389936   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:58.389943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:58.390001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:58.425883   67149 cri.go:89] found id: ""
	I1028 18:29:58.425911   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.425920   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:58.425929   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:58.425943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:58.484392   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:58.484433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:58.498133   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:58.498159   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:58.572358   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:58.572382   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:58.572397   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:58.654963   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:58.654997   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:58.613408   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.614235   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:59.402355   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.403000   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.370479   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:02.370951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:04.372159   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.196593   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:01.209622   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:01.209693   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:01.243682   67149 cri.go:89] found id: ""
	I1028 18:30:01.243708   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.243718   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:01.243726   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:01.243786   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:01.277617   67149 cri.go:89] found id: ""
	I1028 18:30:01.277646   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.277654   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:01.277660   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:01.277710   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:01.314028   67149 cri.go:89] found id: ""
	I1028 18:30:01.314055   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.314067   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:01.314081   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:01.314152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:01.350324   67149 cri.go:89] found id: ""
	I1028 18:30:01.350348   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.350356   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:01.350362   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:01.350415   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:01.385802   67149 cri.go:89] found id: ""
	I1028 18:30:01.385826   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.385834   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:01.385840   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:01.385883   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:01.421507   67149 cri.go:89] found id: ""
	I1028 18:30:01.421534   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.421545   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:01.421553   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:01.421611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:01.457285   67149 cri.go:89] found id: ""
	I1028 18:30:01.457314   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.457326   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:01.457333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:01.457380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:01.490962   67149 cri.go:89] found id: ""
	I1028 18:30:01.490984   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.490992   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:01.491000   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:01.491012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:01.559906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:01.559937   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:01.559962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:01.639455   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:01.639485   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:01.681968   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:01.681994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:01.736639   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:01.736672   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.251876   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:04.265639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:04.265711   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:04.300133   67149 cri.go:89] found id: ""
	I1028 18:30:04.300159   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.300167   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:04.300173   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:04.300228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:04.335723   67149 cri.go:89] found id: ""
	I1028 18:30:04.335749   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.335760   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:04.335767   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:04.335825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:04.373009   67149 cri.go:89] found id: ""
	I1028 18:30:04.373030   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.373040   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:04.373048   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:04.373113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:04.405969   67149 cri.go:89] found id: ""
	I1028 18:30:04.405993   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.406003   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:04.406011   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:04.406066   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:04.441067   67149 cri.go:89] found id: ""
	I1028 18:30:04.441095   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.441106   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:04.441112   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:04.441176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:04.475231   67149 cri.go:89] found id: ""
	I1028 18:30:04.475260   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.475270   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:04.475277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:04.475342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:04.512970   67149 cri.go:89] found id: ""
	I1028 18:30:04.512998   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.513009   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:04.513017   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:04.513078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:04.547857   67149 cri.go:89] found id: ""
	I1028 18:30:04.547880   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.547890   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:04.547901   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:04.547913   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:04.598870   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:04.598900   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.612678   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:04.612705   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:04.686945   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:04.686967   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:04.686979   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:04.764943   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:04.764992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:03.113309   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.113449   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.613568   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:03.902735   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.903116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:06.872012   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:09.371576   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.310905   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:07.323880   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:07.323946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:07.363597   67149 cri.go:89] found id: ""
	I1028 18:30:07.363626   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.363637   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:07.363645   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:07.363706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:07.401051   67149 cri.go:89] found id: ""
	I1028 18:30:07.401073   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.401082   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:07.401089   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:07.401147   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:07.439710   67149 cri.go:89] found id: ""
	I1028 18:30:07.439735   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.439743   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:07.439748   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:07.439796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:07.476627   67149 cri.go:89] found id: ""
	I1028 18:30:07.476653   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.476663   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:07.476670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:07.476747   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:07.508770   67149 cri.go:89] found id: ""
	I1028 18:30:07.508796   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.508807   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:07.508814   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:07.508874   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:07.543467   67149 cri.go:89] found id: ""
	I1028 18:30:07.543496   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.543506   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:07.543514   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:07.543575   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:07.577181   67149 cri.go:89] found id: ""
	I1028 18:30:07.577204   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.577212   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:07.577217   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:07.577266   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:07.611862   67149 cri.go:89] found id: ""
	I1028 18:30:07.611886   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.611896   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:07.611906   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:07.611924   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:07.699794   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:07.699833   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.747920   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:07.747948   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:07.797402   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:07.797434   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:07.811752   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:07.811778   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:07.881604   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.382191   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:10.394572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:10.394624   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:10.428941   67149 cri.go:89] found id: ""
	I1028 18:30:10.428973   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.428984   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:10.429004   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:10.429071   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:10.462526   67149 cri.go:89] found id: ""
	I1028 18:30:10.462558   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.462569   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:10.462578   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:10.462641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:10.498472   67149 cri.go:89] found id: ""
	I1028 18:30:10.498495   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.498503   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:10.498509   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:10.498557   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:10.535400   67149 cri.go:89] found id: ""
	I1028 18:30:10.535422   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.535430   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:10.535436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:10.535483   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:10.568961   67149 cri.go:89] found id: ""
	I1028 18:30:10.568981   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.568988   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:10.568994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:10.569041   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:10.601273   67149 cri.go:89] found id: ""
	I1028 18:30:10.601306   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.601318   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:10.601325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:10.601383   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:10.638093   67149 cri.go:89] found id: ""
	I1028 18:30:10.638124   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.638135   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:10.638141   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:10.638203   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:10.674624   67149 cri.go:89] found id: ""
	I1028 18:30:10.674654   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.674665   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:10.674675   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:10.674688   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:10.714568   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:10.714602   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:10.764732   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:10.764765   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:10.778111   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:10.778139   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:10.854488   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.854516   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:10.854531   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:10.113469   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.614268   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:08.401958   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:10.402159   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.402379   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:11.872789   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.372947   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:13.438803   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:13.452322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:13.452397   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:13.487337   67149 cri.go:89] found id: ""
	I1028 18:30:13.487360   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.487369   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:13.487381   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:13.487488   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:13.521992   67149 cri.go:89] found id: ""
	I1028 18:30:13.522024   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.522034   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:13.522041   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:13.522099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:13.555315   67149 cri.go:89] found id: ""
	I1028 18:30:13.555347   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.555363   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:13.555371   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:13.555431   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:13.589401   67149 cri.go:89] found id: ""
	I1028 18:30:13.589425   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.589436   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:13.589445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:13.589493   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:13.629340   67149 cri.go:89] found id: ""
	I1028 18:30:13.629370   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.629385   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:13.629393   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:13.629454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:13.667307   67149 cri.go:89] found id: ""
	I1028 18:30:13.667337   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.667348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:13.667355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:13.667418   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:13.701457   67149 cri.go:89] found id: ""
	I1028 18:30:13.701513   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.701526   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:13.701536   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:13.701594   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:13.737989   67149 cri.go:89] found id: ""
	I1028 18:30:13.738023   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.738033   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:13.738043   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:13.738056   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:13.791743   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:13.791777   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:13.805501   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:13.805529   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:13.882239   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:13.882262   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:13.882276   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.963480   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:13.963516   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:15.112587   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:17.113242   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.901879   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.902869   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.871650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:18.872448   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.502799   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:16.516397   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:16.516456   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:16.551670   67149 cri.go:89] found id: ""
	I1028 18:30:16.551701   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.551712   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:16.551719   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:16.551771   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:16.584390   67149 cri.go:89] found id: ""
	I1028 18:30:16.584417   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.584428   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:16.584435   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:16.584510   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:16.620868   67149 cri.go:89] found id: ""
	I1028 18:30:16.620892   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.620899   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:16.620904   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:16.620949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:16.654189   67149 cri.go:89] found id: ""
	I1028 18:30:16.654216   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.654225   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:16.654231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:16.654284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:16.694526   67149 cri.go:89] found id: ""
	I1028 18:30:16.694557   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.694568   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:16.694575   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:16.694640   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:16.728857   67149 cri.go:89] found id: ""
	I1028 18:30:16.728884   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.728892   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:16.728898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:16.728948   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:16.763198   67149 cri.go:89] found id: ""
	I1028 18:30:16.763220   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.763227   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:16.763232   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:16.763282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:16.800120   67149 cri.go:89] found id: ""
	I1028 18:30:16.800142   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.800149   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:16.800157   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:16.800167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:16.852710   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:16.852736   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:16.867365   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:16.867395   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:16.945605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:16.945627   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:16.945643   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:17.022838   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:17.022871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.563585   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:19.577612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:19.577683   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:19.615797   67149 cri.go:89] found id: ""
	I1028 18:30:19.615820   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.615829   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:19.615836   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:19.615882   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:19.654780   67149 cri.go:89] found id: ""
	I1028 18:30:19.654802   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.654810   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:19.654816   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:19.654873   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:19.693502   67149 cri.go:89] found id: ""
	I1028 18:30:19.693532   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.693542   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:19.693550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:19.693611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:19.731869   67149 cri.go:89] found id: ""
	I1028 18:30:19.731902   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.731910   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:19.731916   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:19.731974   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:19.765046   67149 cri.go:89] found id: ""
	I1028 18:30:19.765081   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.765092   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:19.765099   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:19.765158   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:19.798082   67149 cri.go:89] found id: ""
	I1028 18:30:19.798105   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.798113   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:19.798119   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:19.798172   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:19.832562   67149 cri.go:89] found id: ""
	I1028 18:30:19.832590   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.832601   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:19.832608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:19.832676   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:19.867213   67149 cri.go:89] found id: ""
	I1028 18:30:19.867240   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.867251   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:19.867260   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:19.867277   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:19.942276   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:19.942304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.977642   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:19.977671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:20.027077   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:20.027109   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:20.040159   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:20.040181   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:20.113350   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:19.113850   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.613505   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:19.402671   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.902317   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.372438   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.872137   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
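	The interleaved pod_ready.go:103 lines come from the other test processes (66600, 66801, 67489), each polling a metrics-server pod until its Ready condition turns true. The sketch below is an illustrative client-go poll, not minikube's actual pod_ready.go; the kubeconfig path, namespace, and pod name are taken from the log lines but should be treated as assumptions for the example.

	// Illustrative client-go readiness poll in the spirit of the pod_ready.go
	// lines above. Kubeconfig path, namespace, and pod name are assumptions.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-gg8bl", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			// Matches the cadence of the "Ready":"False" lines: re-check every few seconds.
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(2 * time.Second)
		}
	}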
	I1028 18:30:22.614379   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:22.628550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:22.628607   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:22.662647   67149 cri.go:89] found id: ""
	I1028 18:30:22.662670   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.662677   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:22.662683   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:22.662732   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:22.696697   67149 cri.go:89] found id: ""
	I1028 18:30:22.696736   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.696747   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:22.696753   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:22.696815   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:22.730011   67149 cri.go:89] found id: ""
	I1028 18:30:22.730039   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.730049   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:22.730056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:22.730114   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:22.766604   67149 cri.go:89] found id: ""
	I1028 18:30:22.766629   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.766639   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:22.766647   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:22.766703   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:22.800581   67149 cri.go:89] found id: ""
	I1028 18:30:22.800608   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.800617   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:22.800625   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:22.800692   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:22.832742   67149 cri.go:89] found id: ""
	I1028 18:30:22.832767   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.832775   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:22.832780   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:22.832823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:22.865850   67149 cri.go:89] found id: ""
	I1028 18:30:22.865876   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.865885   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:22.865892   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:22.865949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:22.904410   67149 cri.go:89] found id: ""
	I1028 18:30:22.904433   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.904443   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:22.904454   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:22.904482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:22.959275   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:22.959310   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:22.972630   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:22.972652   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:23.043851   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:23.043873   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:23.043886   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:23.121657   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:23.121686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:25.662109   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:25.676366   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:25.676443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:25.715192   67149 cri.go:89] found id: ""
	I1028 18:30:25.715216   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.715224   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:25.715230   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:25.715283   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:25.754736   67149 cri.go:89] found id: ""
	I1028 18:30:25.754765   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.754773   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:25.754779   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:25.754823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:25.794179   67149 cri.go:89] found id: ""
	I1028 18:30:25.794207   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.794216   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:25.794224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:25.794278   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:25.833206   67149 cri.go:89] found id: ""
	I1028 18:30:25.833238   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.833246   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:25.833252   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:25.833298   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:25.871628   67149 cri.go:89] found id: ""
	I1028 18:30:25.871659   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.871669   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:25.871677   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:25.871735   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:25.910900   67149 cri.go:89] found id: ""
	I1028 18:30:25.910924   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.910934   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:25.910942   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:25.911001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:25.943972   67149 cri.go:89] found id: ""
	I1028 18:30:25.943992   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.943999   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:25.944004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:25.944059   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:25.982521   67149 cri.go:89] found id: ""
	I1028 18:30:25.982544   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.982551   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:25.982559   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:25.982569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:26.033003   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:26.033031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:26.046480   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:26.046503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 18:30:24.112244   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.113815   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.902652   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.402135   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:25.873075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.372129   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	W1028 18:30:26.117194   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:26.117213   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:26.117230   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:26.195399   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:26.195430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:28.737237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:28.751846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:28.751910   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:28.794259   67149 cri.go:89] found id: ""
	I1028 18:30:28.794290   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.794301   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:28.794308   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:28.794374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:28.827573   67149 cri.go:89] found id: ""
	I1028 18:30:28.827603   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.827611   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:28.827616   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:28.827671   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:28.860676   67149 cri.go:89] found id: ""
	I1028 18:30:28.860702   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.860713   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:28.860721   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:28.860780   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:28.897302   67149 cri.go:89] found id: ""
	I1028 18:30:28.897327   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.897343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:28.897351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:28.897410   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:28.933425   67149 cri.go:89] found id: ""
	I1028 18:30:28.933454   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.933464   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:28.933471   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:28.933535   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:28.966004   67149 cri.go:89] found id: ""
	I1028 18:30:28.966032   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.966043   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:28.966051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:28.966107   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:29.002788   67149 cri.go:89] found id: ""
	I1028 18:30:29.002818   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.002829   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:29.002835   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:29.002894   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:29.033351   67149 cri.go:89] found id: ""
	I1028 18:30:29.033379   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.033389   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:29.033400   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:29.033420   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:29.107997   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:29.108025   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:29.144727   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:29.144753   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:29.206487   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:29.206521   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:29.219722   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:29.219744   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:29.288254   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
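	Every describe-nodes attempt in this run fails the same way: kubectl cannot reach an API server on localhost:8443 because, as the crictl queries show, no kube-apiserver container exists yet. A minimal sketch, assuming the same host and port, that simply checks for a listener before attempting the kubectl probe:

	// Minimal sketch: check whether anything is listening on localhost:8443.
	// This mirrors why each describe-nodes attempt above exits with
	// "connection refused" -- nothing is serving the apiserver port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // matches the refused connections in the log
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}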
	I1028 18:30:28.612485   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.113113   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.902960   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.871338   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.372081   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.789035   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:31.802587   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:31.802650   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:31.838372   67149 cri.go:89] found id: ""
	I1028 18:30:31.838401   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.838410   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:31.838416   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:31.838469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:31.877794   67149 cri.go:89] found id: ""
	I1028 18:30:31.877822   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.877833   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:31.877840   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:31.877896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:31.917442   67149 cri.go:89] found id: ""
	I1028 18:30:31.917472   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.917483   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:31.917490   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:31.917549   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:31.951900   67149 cri.go:89] found id: ""
	I1028 18:30:31.951931   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.951943   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:31.951951   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:31.952008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:31.988011   67149 cri.go:89] found id: ""
	I1028 18:30:31.988040   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.988051   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:31.988058   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:31.988116   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:32.021042   67149 cri.go:89] found id: ""
	I1028 18:30:32.021063   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.021071   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:32.021077   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:32.021124   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:32.053748   67149 cri.go:89] found id: ""
	I1028 18:30:32.053770   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.053778   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:32.053783   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:32.053837   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:32.089725   67149 cri.go:89] found id: ""
	I1028 18:30:32.089756   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.089766   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:32.089777   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:32.089790   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:32.140000   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:32.140031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:32.154023   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:32.154046   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:32.231222   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:32.231242   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:32.231255   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:32.311354   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:32.311388   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:34.852507   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:34.867133   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:34.867198   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:34.901201   67149 cri.go:89] found id: ""
	I1028 18:30:34.901228   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.901238   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:34.901245   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:34.901300   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:34.962788   67149 cri.go:89] found id: ""
	I1028 18:30:34.962814   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.962824   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:34.962835   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:34.962896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:34.996879   67149 cri.go:89] found id: ""
	I1028 18:30:34.996906   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.996917   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:34.996926   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:34.996986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:35.033516   67149 cri.go:89] found id: ""
	I1028 18:30:35.033541   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.033553   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:35.033560   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:35.033622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:35.066903   67149 cri.go:89] found id: ""
	I1028 18:30:35.066933   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.066945   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:35.066953   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:35.067010   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:35.099675   67149 cri.go:89] found id: ""
	I1028 18:30:35.099697   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.099704   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:35.099710   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:35.099755   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:35.133595   67149 cri.go:89] found id: ""
	I1028 18:30:35.133623   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.133633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:35.133641   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:35.133699   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:35.172236   67149 cri.go:89] found id: ""
	I1028 18:30:35.172262   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.172272   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:35.172282   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:35.172296   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:35.224952   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:35.224981   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:35.238554   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:35.238578   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:35.318991   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:35.319024   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:35.319040   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:35.399763   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:35.399799   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:33.612446   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.613847   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.402375   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.402653   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.902346   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:38.372413   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.947847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:37.963147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:37.963210   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.001768   67149 cri.go:89] found id: ""
	I1028 18:30:38.001792   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.001802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:38.001809   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:38.001868   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:38.042877   67149 cri.go:89] found id: ""
	I1028 18:30:38.042905   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.042916   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:38.042924   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:38.042986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:38.078116   67149 cri.go:89] found id: ""
	I1028 18:30:38.078143   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.078154   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:38.078162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:38.078226   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:38.111082   67149 cri.go:89] found id: ""
	I1028 18:30:38.111108   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.111119   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:38.111127   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:38.111187   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:38.144863   67149 cri.go:89] found id: ""
	I1028 18:30:38.144889   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.144898   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:38.144906   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:38.144962   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:38.178671   67149 cri.go:89] found id: ""
	I1028 18:30:38.178701   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.178712   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:38.178719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:38.178774   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:38.218441   67149 cri.go:89] found id: ""
	I1028 18:30:38.218464   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.218472   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:38.218477   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:38.218528   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:38.252697   67149 cri.go:89] found id: ""
	I1028 18:30:38.252719   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.252727   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:38.252736   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:38.252745   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:38.304813   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:38.304853   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:38.318437   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:38.318462   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:38.389959   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:38.389987   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:38.390002   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:38.471462   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:38.471495   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:41.013647   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:41.027167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:41.027233   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.113426   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:39.903261   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.402381   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.871193   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.873502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:41.062559   67149 cri.go:89] found id: ""
	I1028 18:30:41.062590   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.062601   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:41.062609   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:41.062667   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:41.097732   67149 cri.go:89] found id: ""
	I1028 18:30:41.097758   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.097767   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:41.097773   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:41.097819   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:41.133067   67149 cri.go:89] found id: ""
	I1028 18:30:41.133089   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.133097   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:41.133102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:41.133150   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:41.168640   67149 cri.go:89] found id: ""
	I1028 18:30:41.168674   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.168684   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:41.168691   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:41.168754   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:41.206429   67149 cri.go:89] found id: ""
	I1028 18:30:41.206453   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.206463   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:41.206470   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:41.206527   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:41.248326   67149 cri.go:89] found id: ""
	I1028 18:30:41.248350   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.248360   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:41.248369   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:41.248429   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:41.283703   67149 cri.go:89] found id: ""
	I1028 18:30:41.283734   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.283746   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:41.283753   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:41.283810   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:41.327759   67149 cri.go:89] found id: ""
	I1028 18:30:41.327786   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.327796   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:41.327807   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:41.327820   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:41.388563   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:41.388593   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:41.406411   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:41.406435   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:41.490605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:41.490626   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:41.490637   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:41.569386   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:41.569433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.109394   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:44.123047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:44.123113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:44.156762   67149 cri.go:89] found id: ""
	I1028 18:30:44.156792   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.156802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:44.156810   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:44.156867   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:44.192244   67149 cri.go:89] found id: ""
	I1028 18:30:44.192271   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.192282   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:44.192289   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:44.192357   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:44.224059   67149 cri.go:89] found id: ""
	I1028 18:30:44.224094   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.224101   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:44.224115   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:44.224168   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:44.258750   67149 cri.go:89] found id: ""
	I1028 18:30:44.258779   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.258789   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:44.258797   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:44.258854   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:44.295600   67149 cri.go:89] found id: ""
	I1028 18:30:44.295624   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.295632   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:44.295638   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:44.295684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:44.327278   67149 cri.go:89] found id: ""
	I1028 18:30:44.327302   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.327309   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:44.327315   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:44.327370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:44.360734   67149 cri.go:89] found id: ""
	I1028 18:30:44.360760   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.360768   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:44.360774   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:44.360822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:44.398198   67149 cri.go:89] found id: ""
	I1028 18:30:44.398224   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.398234   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:44.398249   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:44.398261   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:44.476135   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:44.476167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.514073   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:44.514105   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:44.563001   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:44.563033   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:44.576882   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:44.576912   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:44.648532   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:43.112043   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.113135   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.113382   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:44.403147   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:46.902890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.370854   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.371758   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.373946   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.149133   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:47.165612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:47.165696   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:47.203960   67149 cri.go:89] found id: ""
	I1028 18:30:47.203987   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.203996   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:47.204002   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:47.204065   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:47.236731   67149 cri.go:89] found id: ""
	I1028 18:30:47.236757   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.236766   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:47.236774   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:47.236828   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:47.273779   67149 cri.go:89] found id: ""
	I1028 18:30:47.273808   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.273820   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:47.273826   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:47.273878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:47.309996   67149 cri.go:89] found id: ""
	I1028 18:30:47.310020   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.310028   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:47.310034   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:47.310108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:47.352904   67149 cri.go:89] found id: ""
	I1028 18:30:47.352925   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.352934   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:47.352939   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:47.352990   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:47.389641   67149 cri.go:89] found id: ""
	I1028 18:30:47.389660   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.389667   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:47.389672   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:47.389718   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:47.422591   67149 cri.go:89] found id: ""
	I1028 18:30:47.422622   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.422632   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:47.422639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:47.422694   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:47.454849   67149 cri.go:89] found id: ""
	I1028 18:30:47.454876   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.454886   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:47.454895   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:47.454916   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:47.506176   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:47.506203   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:47.519084   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:47.519108   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:47.585660   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.585681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:47.585696   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:47.664904   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:47.664939   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:50.203775   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:50.216923   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:50.216992   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:50.252506   67149 cri.go:89] found id: ""
	I1028 18:30:50.252531   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.252541   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:50.252548   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:50.252608   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:50.288641   67149 cri.go:89] found id: ""
	I1028 18:30:50.288669   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.288678   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:50.288684   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:50.288739   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:50.322130   67149 cri.go:89] found id: ""
	I1028 18:30:50.322163   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.322174   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:50.322182   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:50.322240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:50.359508   67149 cri.go:89] found id: ""
	I1028 18:30:50.359536   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.359546   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:50.359554   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:50.359617   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:50.393571   67149 cri.go:89] found id: ""
	I1028 18:30:50.393607   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.393618   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:50.393626   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:50.393685   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:50.428683   67149 cri.go:89] found id: ""
	I1028 18:30:50.428705   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.428713   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:50.428719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:50.428767   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:50.464086   67149 cri.go:89] found id: ""
	I1028 18:30:50.464111   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.464119   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:50.464125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:50.464183   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:50.496695   67149 cri.go:89] found id: ""
	I1028 18:30:50.496726   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.496736   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:50.496745   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:50.496755   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:50.545495   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:50.545526   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:50.558819   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:50.558852   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:50.636344   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:50.636369   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:50.636384   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:50.720270   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:50.720304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:49.612927   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.613353   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.402779   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.901517   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.873490   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:54.372373   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.261194   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:53.274451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:53.274507   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:53.306258   67149 cri.go:89] found id: ""
	I1028 18:30:53.306286   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.306295   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:53.306301   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:53.306362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:53.340222   67149 cri.go:89] found id: ""
	I1028 18:30:53.340244   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.340253   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:53.340258   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:53.340322   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:53.377726   67149 cri.go:89] found id: ""
	I1028 18:30:53.377750   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.377760   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:53.377767   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:53.377820   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:53.414228   67149 cri.go:89] found id: ""
	I1028 18:30:53.414252   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.414262   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:53.414275   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:53.414332   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:53.449152   67149 cri.go:89] found id: ""
	I1028 18:30:53.449179   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.449186   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:53.449192   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:53.449237   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:53.485678   67149 cri.go:89] found id: ""
	I1028 18:30:53.485705   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.485716   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:53.485723   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:53.485784   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:53.520764   67149 cri.go:89] found id: ""
	I1028 18:30:53.520791   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.520802   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:53.520810   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:53.520870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:53.561153   67149 cri.go:89] found id: ""
	I1028 18:30:53.561176   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.561184   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:53.561192   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:53.561202   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:53.642192   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:53.642242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.686527   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:53.686567   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:53.740815   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:53.740849   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:53.754577   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:53.754604   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:53.823717   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:54.112985   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.612820   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.903128   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:55.903482   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.372798   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.871814   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.324847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:56.338572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:56.338628   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:56.375482   67149 cri.go:89] found id: ""
	I1028 18:30:56.375506   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.375517   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:56.375524   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:56.375580   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:56.407894   67149 cri.go:89] found id: ""
	I1028 18:30:56.407921   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.407931   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:56.407938   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:56.407993   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:56.447006   67149 cri.go:89] found id: ""
	I1028 18:30:56.447037   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.447048   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:56.447055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:56.447112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:56.483850   67149 cri.go:89] found id: ""
	I1028 18:30:56.483880   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.483890   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:56.483898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:56.483958   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:56.520008   67149 cri.go:89] found id: ""
	I1028 18:30:56.520038   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.520045   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:56.520051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:56.520099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:56.552567   67149 cri.go:89] found id: ""
	I1028 18:30:56.552592   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.552600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:56.552608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:56.552658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:56.591277   67149 cri.go:89] found id: ""
	I1028 18:30:56.591297   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.591305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:56.591311   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:56.591362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:56.632164   67149 cri.go:89] found id: ""
	I1028 18:30:56.632186   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.632194   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:56.632202   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:56.632219   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:56.683590   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:56.683623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:56.698509   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:56.698539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:56.777141   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.777171   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:56.777188   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:56.851801   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:56.851842   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.394266   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:59.408460   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:59.408545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:59.444066   67149 cri.go:89] found id: ""
	I1028 18:30:59.444092   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.444104   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:59.444112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:59.444165   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:59.479531   67149 cri.go:89] found id: ""
	I1028 18:30:59.479557   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.479568   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:59.479576   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:59.479622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:59.519467   67149 cri.go:89] found id: ""
	I1028 18:30:59.519489   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.519496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:59.519502   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:59.519546   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:59.551108   67149 cri.go:89] found id: ""
	I1028 18:30:59.551131   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.551140   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:59.551146   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:59.551197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:59.585875   67149 cri.go:89] found id: ""
	I1028 18:30:59.585899   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.585907   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:59.585912   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:59.585968   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:59.620571   67149 cri.go:89] found id: ""
	I1028 18:30:59.620595   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.620602   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:59.620608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:59.620655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:59.653927   67149 cri.go:89] found id: ""
	I1028 18:30:59.653954   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.653965   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:59.653972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:59.654039   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:59.689138   67149 cri.go:89] found id: ""
	I1028 18:30:59.689160   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.689168   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:59.689175   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:59.689185   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:59.768231   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:59.768270   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.811980   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:59.812007   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:59.864509   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:59.864543   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:59.879329   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:59.879354   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:59.950134   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:59.112280   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:01.113341   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.402845   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.902628   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.904642   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.872873   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:03.371672   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.450237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:02.464689   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:02.464765   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:02.500938   67149 cri.go:89] found id: ""
	I1028 18:31:02.500964   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.500975   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:02.500982   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:02.501043   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:02.534580   67149 cri.go:89] found id: ""
	I1028 18:31:02.534608   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.534620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:02.534628   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:02.534684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:02.570203   67149 cri.go:89] found id: ""
	I1028 18:31:02.570224   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.570231   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:02.570237   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:02.570284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:02.606037   67149 cri.go:89] found id: ""
	I1028 18:31:02.606064   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.606072   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:02.606082   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:02.606135   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:02.640622   67149 cri.go:89] found id: ""
	I1028 18:31:02.640646   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.640656   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:02.640663   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:02.640723   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:02.676406   67149 cri.go:89] found id: ""
	I1028 18:31:02.676434   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.676444   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:02.676451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:02.676520   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:02.710284   67149 cri.go:89] found id: ""
	I1028 18:31:02.710308   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.710316   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:02.710322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:02.710376   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:02.750853   67149 cri.go:89] found id: ""
	I1028 18:31:02.750899   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.750910   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:02.750918   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:02.750929   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:02.825886   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.825913   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:02.825927   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:02.904828   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:02.904857   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:02.941886   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:02.941922   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:02.991603   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:02.991632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.505655   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:05.520582   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:05.520638   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:05.558724   67149 cri.go:89] found id: ""
	I1028 18:31:05.558753   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.558763   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:05.558770   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:05.558816   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:05.597864   67149 cri.go:89] found id: ""
	I1028 18:31:05.597885   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.597893   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:05.597898   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:05.597956   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:05.643571   67149 cri.go:89] found id: ""
	I1028 18:31:05.643602   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.643613   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:05.643620   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:05.643679   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:05.682010   67149 cri.go:89] found id: ""
	I1028 18:31:05.682039   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.682048   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:05.682053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:05.682106   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:05.716043   67149 cri.go:89] found id: ""
	I1028 18:31:05.716067   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.716080   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:05.716086   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:05.716134   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:05.750962   67149 cri.go:89] found id: ""
	I1028 18:31:05.750995   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.751010   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:05.751016   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:05.751078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:05.785059   67149 cri.go:89] found id: ""
	I1028 18:31:05.785111   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.785124   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:05.785132   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:05.785193   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:05.833525   67149 cri.go:89] found id: ""
	I1028 18:31:05.833550   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.833559   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:05.833566   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:05.833579   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:05.887766   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:05.887796   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.902575   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:05.902606   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:05.975082   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:05.975108   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:05.975122   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:03.613265   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.114362   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.402167   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:07.402252   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.873147   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:08.370748   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.050369   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:06.050396   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.593506   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:08.606188   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:08.606251   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:08.645186   67149 cri.go:89] found id: ""
	I1028 18:31:08.645217   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.645227   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:08.645235   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:08.645294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:08.680728   67149 cri.go:89] found id: ""
	I1028 18:31:08.680759   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.680771   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:08.680778   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:08.680833   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:08.714733   67149 cri.go:89] found id: ""
	I1028 18:31:08.714760   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.714772   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:08.714779   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:08.714844   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:08.750293   67149 cri.go:89] found id: ""
	I1028 18:31:08.750323   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.750333   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:08.750339   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:08.750390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:08.784521   67149 cri.go:89] found id: ""
	I1028 18:31:08.784548   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.784559   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:08.784566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:08.784629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:08.818808   67149 cri.go:89] found id: ""
	I1028 18:31:08.818838   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.818849   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:08.818857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:08.818920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:08.855575   67149 cri.go:89] found id: ""
	I1028 18:31:08.855608   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.855619   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:08.855633   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:08.855690   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:08.892996   67149 cri.go:89] found id: ""
	I1028 18:31:08.893024   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.893035   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:08.893045   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:08.893064   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.937056   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:08.937084   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:08.989013   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:08.989048   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:09.002048   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:09.002077   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:09.075247   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:09.075277   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:09.075290   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:08.612396   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.612689   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:09.402595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.903403   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.371335   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:12.371435   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.371502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.654701   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:11.668066   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:11.668146   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:11.701666   67149 cri.go:89] found id: ""
	I1028 18:31:11.701693   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.701703   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:11.701710   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:11.701769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:11.738342   67149 cri.go:89] found id: ""
	I1028 18:31:11.738368   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.738376   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:11.738381   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:11.738428   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:11.772009   67149 cri.go:89] found id: ""
	I1028 18:31:11.772035   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.772045   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:11.772053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:11.772118   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:11.816210   67149 cri.go:89] found id: ""
	I1028 18:31:11.816237   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.816245   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:11.816251   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:11.816314   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:11.856675   67149 cri.go:89] found id: ""
	I1028 18:31:11.856704   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.856714   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:11.856722   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:11.856785   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:11.896566   67149 cri.go:89] found id: ""
	I1028 18:31:11.896592   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.896600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:11.896606   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:11.896665   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:11.932599   67149 cri.go:89] found id: ""
	I1028 18:31:11.932624   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.932633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:11.932640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:11.932704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:11.966952   67149 cri.go:89] found id: ""
	I1028 18:31:11.966982   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.967008   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:11.967019   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:11.967037   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:12.016465   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:12.016502   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:12.029314   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:12.029343   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:12.098906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:12.098936   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:12.098954   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:12.176440   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:12.176489   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:14.720173   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:14.733796   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:14.733848   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:14.774072   67149 cri.go:89] found id: ""
	I1028 18:31:14.774093   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.774100   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:14.774106   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:14.774152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:14.816116   67149 cri.go:89] found id: ""
	I1028 18:31:14.816145   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.816158   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:14.816166   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:14.816224   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:14.851167   67149 cri.go:89] found id: ""
	I1028 18:31:14.851189   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.851196   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:14.851202   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:14.851247   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:14.885887   67149 cri.go:89] found id: ""
	I1028 18:31:14.885918   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.885926   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:14.885931   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:14.885976   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:14.923787   67149 cri.go:89] found id: ""
	I1028 18:31:14.923815   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.923826   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:14.923833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:14.923892   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:14.960117   67149 cri.go:89] found id: ""
	I1028 18:31:14.960148   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.960160   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:14.960167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:14.960240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:14.998418   67149 cri.go:89] found id: ""
	I1028 18:31:14.998458   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.998470   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:14.998485   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:14.998545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:15.031985   67149 cri.go:89] found id: ""
	I1028 18:31:15.032005   67149 logs.go:282] 0 containers: []
	W1028 18:31:15.032014   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:15.032027   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:15.032038   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:15.045239   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:15.045264   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:15.118954   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:15.118978   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:15.118994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:15.200538   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:15.200569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:15.243581   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:15.243603   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:13.112232   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:15.113498   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.612946   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.401769   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.402729   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.871916   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.872378   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.794670   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:17.808325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:17.808380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:17.841888   67149 cri.go:89] found id: ""
	I1028 18:31:17.841911   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.841919   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:17.841925   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:17.841979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:17.881241   67149 cri.go:89] found id: ""
	I1028 18:31:17.881261   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.881269   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:17.881274   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:17.881331   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:17.922394   67149 cri.go:89] found id: ""
	I1028 18:31:17.922419   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.922428   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:17.922434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:17.922498   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:17.963519   67149 cri.go:89] found id: ""
	I1028 18:31:17.963546   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.963558   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:17.963566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:17.963641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:18.003181   67149 cri.go:89] found id: ""
	I1028 18:31:18.003202   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.003209   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:18.003214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:18.003261   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:18.040305   67149 cri.go:89] found id: ""
	I1028 18:31:18.040338   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.040348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:18.040356   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:18.040413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:18.077671   67149 cri.go:89] found id: ""
	I1028 18:31:18.077696   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.077708   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:18.077715   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:18.077777   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:18.116155   67149 cri.go:89] found id: ""
	I1028 18:31:18.116176   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.116182   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:18.116190   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:18.116201   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:18.168343   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:18.168370   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:18.181962   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:18.181988   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:18.260227   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:18.260251   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:18.260265   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:18.346588   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:18.346620   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:20.885832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:20.899053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:20.899121   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:20.954770   67149 cri.go:89] found id: ""
	I1028 18:31:20.954797   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.954806   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:20.954812   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:20.954870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:20.989809   67149 cri.go:89] found id: ""
	I1028 18:31:20.989834   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.989842   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:20.989848   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:20.989900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:21.027150   67149 cri.go:89] found id: ""
	I1028 18:31:21.027179   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.027191   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:21.027199   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:21.027259   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:20.113283   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:22.612710   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.902738   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.403607   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.371574   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.871000   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.061235   67149 cri.go:89] found id: ""
	I1028 18:31:21.061260   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.061270   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:21.061277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:21.061337   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:21.095451   67149 cri.go:89] found id: ""
	I1028 18:31:21.095473   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.095481   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:21.095487   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:21.095540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:21.135576   67149 cri.go:89] found id: ""
	I1028 18:31:21.135598   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.135606   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:21.135612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:21.135660   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:21.170816   67149 cri.go:89] found id: ""
	I1028 18:31:21.170845   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.170854   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:21.170860   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:21.170920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:21.204616   67149 cri.go:89] found id: ""
	I1028 18:31:21.204649   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.204660   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:21.204672   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:21.204686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:21.254523   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:21.254556   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:21.267981   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:21.268005   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:21.336786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:21.336813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:21.336828   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:21.420596   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:21.420625   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:23.962346   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:23.976628   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:23.976697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:24.016418   67149 cri.go:89] found id: ""
	I1028 18:31:24.016444   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.016453   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:24.016458   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:24.016533   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:24.051448   67149 cri.go:89] found id: ""
	I1028 18:31:24.051474   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.051483   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:24.051488   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:24.051554   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:24.090787   67149 cri.go:89] found id: ""
	I1028 18:31:24.090816   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.090829   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:24.090836   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:24.090900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:24.126315   67149 cri.go:89] found id: ""
	I1028 18:31:24.126342   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.126349   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:24.126355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:24.126402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:24.161340   67149 cri.go:89] found id: ""
	I1028 18:31:24.161367   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.161379   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:24.161387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:24.161445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:24.195991   67149 cri.go:89] found id: ""
	I1028 18:31:24.196017   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.196028   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:24.196036   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:24.196084   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:24.229789   67149 cri.go:89] found id: ""
	I1028 18:31:24.229822   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.229837   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:24.229845   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:24.229906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:24.264724   67149 cri.go:89] found id: ""
	I1028 18:31:24.264748   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.264757   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:24.264765   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:24.264775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:24.303551   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:24.303574   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:24.351693   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:24.351725   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:24.364537   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:24.364566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:24.436935   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:24.436955   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:24.436966   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:25.112870   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.612492   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.902008   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.902544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.902622   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.871089   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.871265   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:29.872201   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.014928   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:27.029540   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:27.029609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:27.064598   67149 cri.go:89] found id: ""
	I1028 18:31:27.064626   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.064636   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:27.064643   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:27.064704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:27.099432   67149 cri.go:89] found id: ""
	I1028 18:31:27.099455   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.099465   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:27.099472   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:27.099531   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:27.133961   67149 cri.go:89] found id: ""
	I1028 18:31:27.133996   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.134006   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:27.134012   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:27.134075   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:27.171976   67149 cri.go:89] found id: ""
	I1028 18:31:27.172003   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.172014   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:27.172022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:27.172092   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:27.205681   67149 cri.go:89] found id: ""
	I1028 18:31:27.205710   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.205721   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:27.205730   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:27.205793   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:27.244571   67149 cri.go:89] found id: ""
	I1028 18:31:27.244603   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.244612   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:27.244617   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:27.244661   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:27.281692   67149 cri.go:89] found id: ""
	I1028 18:31:27.281722   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.281738   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:27.281746   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:27.281800   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:27.335003   67149 cri.go:89] found id: ""
	I1028 18:31:27.335033   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.335041   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:27.335049   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:27.335066   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:27.353992   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:27.354017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:27.457103   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:27.457125   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:27.457136   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.537717   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:27.537746   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:27.579842   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:27.579870   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.133749   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:30.147518   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:30.147576   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:30.182683   67149 cri.go:89] found id: ""
	I1028 18:31:30.182711   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.182722   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:30.182729   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:30.182792   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:30.215088   67149 cri.go:89] found id: ""
	I1028 18:31:30.215109   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.215118   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:30.215124   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:30.215176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:30.250169   67149 cri.go:89] found id: ""
	I1028 18:31:30.250194   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.250202   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:30.250207   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:30.250284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:30.286028   67149 cri.go:89] found id: ""
	I1028 18:31:30.286055   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.286062   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:30.286069   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:30.286112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:30.320503   67149 cri.go:89] found id: ""
	I1028 18:31:30.320528   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.320539   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:30.320547   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:30.320604   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:30.352773   67149 cri.go:89] found id: ""
	I1028 18:31:30.352793   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.352800   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:30.352806   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:30.352859   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:30.385922   67149 cri.go:89] found id: ""
	I1028 18:31:30.385944   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.385951   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:30.385956   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:30.385999   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:30.421909   67149 cri.go:89] found id: ""
	I1028 18:31:30.421933   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.421945   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:30.421956   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:30.421971   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.470917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:30.470944   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:30.484033   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:30.484059   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:30.554810   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:30.554836   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:30.554850   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:30.634403   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:30.634432   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:30.113496   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.613397   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:30.402688   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.902277   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:31.872598   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:34.371198   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:33.182127   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:33.194994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:33.195063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:33.233076   67149 cri.go:89] found id: ""
	I1028 18:31:33.233098   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.233106   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:33.233112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:33.233160   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:33.266963   67149 cri.go:89] found id: ""
	I1028 18:31:33.266998   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.267021   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:33.267028   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:33.267083   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:33.305888   67149 cri.go:89] found id: ""
	I1028 18:31:33.305914   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.305922   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:33.305928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:33.305979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:33.339451   67149 cri.go:89] found id: ""
	I1028 18:31:33.339479   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.339489   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:33.339496   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:33.339555   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:33.375038   67149 cri.go:89] found id: ""
	I1028 18:31:33.375065   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.375073   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:33.375079   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:33.375125   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:33.409157   67149 cri.go:89] found id: ""
	I1028 18:31:33.409176   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.409183   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:33.409189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:33.409243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:33.449108   67149 cri.go:89] found id: ""
	I1028 18:31:33.449133   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.449149   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:33.449155   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:33.449227   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:33.491194   67149 cri.go:89] found id: ""
	I1028 18:31:33.491215   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.491224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:33.491232   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:33.491250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.530590   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:33.530618   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:33.581933   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:33.581962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:33.595387   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:33.595416   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:33.664855   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:33.664882   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:33.664899   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:35.113673   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.612606   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:35.401938   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.402270   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.372499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:38.372670   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.242724   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:36.256152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:36.256221   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:36.292452   67149 cri.go:89] found id: ""
	I1028 18:31:36.292494   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.292504   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:36.292511   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:36.292568   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:36.325210   67149 cri.go:89] found id: ""
	I1028 18:31:36.325231   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.325238   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:36.325244   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:36.325293   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:36.356738   67149 cri.go:89] found id: ""
	I1028 18:31:36.356757   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.356764   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:36.356769   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:36.356827   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:36.389678   67149 cri.go:89] found id: ""
	I1028 18:31:36.389704   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.389712   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:36.389717   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:36.389775   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:36.422956   67149 cri.go:89] found id: ""
	I1028 18:31:36.422989   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.422998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:36.423005   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:36.423061   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:36.456877   67149 cri.go:89] found id: ""
	I1028 18:31:36.456904   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.456914   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:36.456921   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:36.456983   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:36.489728   67149 cri.go:89] found id: ""
	I1028 18:31:36.489758   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.489766   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:36.489772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:36.489829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:36.524307   67149 cri.go:89] found id: ""
	I1028 18:31:36.524338   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.524350   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:36.524360   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:36.524372   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:36.574771   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:36.574800   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:36.587485   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:36.587506   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:36.655922   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:36.655949   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:36.655962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.738312   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:36.738352   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.279425   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:39.293108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:39.293167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:39.325542   67149 cri.go:89] found id: ""
	I1028 18:31:39.325573   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.325584   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:39.325592   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:39.325656   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:39.357581   67149 cri.go:89] found id: ""
	I1028 18:31:39.357609   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.357620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:39.357627   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:39.357681   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:39.394833   67149 cri.go:89] found id: ""
	I1028 18:31:39.394853   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.394860   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:39.394866   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:39.394916   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:39.430151   67149 cri.go:89] found id: ""
	I1028 18:31:39.430178   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.430188   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:39.430196   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:39.430254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:39.468060   67149 cri.go:89] found id: ""
	I1028 18:31:39.468089   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.468100   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:39.468108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:39.468181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:39.503702   67149 cri.go:89] found id: ""
	I1028 18:31:39.503734   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.503752   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:39.503761   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:39.503829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:39.536193   67149 cri.go:89] found id: ""
	I1028 18:31:39.536221   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.536233   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:39.536240   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:39.536305   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:39.570194   67149 cri.go:89] found id: ""
	I1028 18:31:39.570215   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.570224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:39.570232   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:39.570245   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:39.647179   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:39.647207   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:39.647220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:39.725980   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:39.726012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.765671   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:39.765704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:39.818257   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:39.818289   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:39.614055   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.112561   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:39.902061   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.402314   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:40.871483   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.872270   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.332335   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:42.344964   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:42.345031   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:42.380904   67149 cri.go:89] found id: ""
	I1028 18:31:42.380926   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.380933   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:42.380938   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:42.380982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:42.414361   67149 cri.go:89] found id: ""
	I1028 18:31:42.414385   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.414393   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:42.414399   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:42.414443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:42.447931   67149 cri.go:89] found id: ""
	I1028 18:31:42.447957   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.447968   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:42.447975   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:42.448024   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:42.483262   67149 cri.go:89] found id: ""
	I1028 18:31:42.483283   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.483296   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:42.483301   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:42.483365   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:42.516665   67149 cri.go:89] found id: ""
	I1028 18:31:42.516693   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.516702   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:42.516709   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:42.516776   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:42.550160   67149 cri.go:89] found id: ""
	I1028 18:31:42.550181   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.550188   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:42.550193   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:42.550238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:42.583509   67149 cri.go:89] found id: ""
	I1028 18:31:42.583535   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.583546   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:42.583552   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:42.583611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:42.619276   67149 cri.go:89] found id: ""
	I1028 18:31:42.619312   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.619320   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:42.619328   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:42.619338   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:42.692442   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:42.692487   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:42.731768   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:42.731798   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:42.783997   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:42.784043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.797809   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:42.797834   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:42.863351   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.363648   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:45.376277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:45.376341   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:45.415231   67149 cri.go:89] found id: ""
	I1028 18:31:45.415255   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.415265   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:45.415273   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:45.415330   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:45.451133   67149 cri.go:89] found id: ""
	I1028 18:31:45.451157   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.451164   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:45.451170   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:45.451228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:45.483526   67149 cri.go:89] found id: ""
	I1028 18:31:45.483552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.483562   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:45.483567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:45.483621   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:45.515799   67149 cri.go:89] found id: ""
	I1028 18:31:45.515828   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.515838   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:45.515846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:45.515906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:45.548043   67149 cri.go:89] found id: ""
	I1028 18:31:45.548071   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.548082   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:45.548090   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:45.548153   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:45.581525   67149 cri.go:89] found id: ""
	I1028 18:31:45.581552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.581563   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:45.581570   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:45.581629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:45.622258   67149 cri.go:89] found id: ""
	I1028 18:31:45.622282   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.622290   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:45.622296   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:45.622353   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:45.661255   67149 cri.go:89] found id: ""
	I1028 18:31:45.661275   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.661284   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:45.661292   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:45.661304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:45.675209   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:45.675242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:45.737546   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.737573   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:45.737592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:45.816012   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:45.816043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:45.854135   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:45.854167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:44.612155   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.612875   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:44.402557   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.902339   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:45.371918   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:47.872710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.875644   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:48.406233   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:48.418950   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:48.419001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:48.452933   67149 cri.go:89] found id: ""
	I1028 18:31:48.452952   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.452961   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:48.452975   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:48.453034   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:48.489604   67149 cri.go:89] found id: ""
	I1028 18:31:48.489630   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.489640   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:48.489648   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:48.489706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:48.525463   67149 cri.go:89] found id: ""
	I1028 18:31:48.525493   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.525504   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:48.525511   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:48.525566   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:48.559266   67149 cri.go:89] found id: ""
	I1028 18:31:48.559294   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.559302   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:48.559308   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:48.559363   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:48.592670   67149 cri.go:89] found id: ""
	I1028 18:31:48.592695   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.592706   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:48.592714   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:48.592769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:48.627175   67149 cri.go:89] found id: ""
	I1028 18:31:48.627196   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.627205   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:48.627213   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:48.627260   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:48.661864   67149 cri.go:89] found id: ""
	I1028 18:31:48.661887   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.661895   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:48.661901   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:48.661946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:48.696731   67149 cri.go:89] found id: ""
	I1028 18:31:48.696756   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.696765   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:48.696775   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:48.696788   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.745390   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:48.745417   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:48.759218   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:48.759241   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:48.830299   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:48.830331   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:48.830349   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:48.909934   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:48.909963   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:49.112884   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.613217   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.402707   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.903103   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:52.373283   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.872603   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.451597   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:51.464889   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:51.464943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:51.499962   67149 cri.go:89] found id: ""
	I1028 18:31:51.499990   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.500001   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:51.500010   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:51.500069   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:51.532341   67149 cri.go:89] found id: ""
	I1028 18:31:51.532370   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.532380   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:51.532388   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:51.532443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:51.565531   67149 cri.go:89] found id: ""
	I1028 18:31:51.565554   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.565561   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:51.565567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:51.565614   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:51.602859   67149 cri.go:89] found id: ""
	I1028 18:31:51.602882   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.602894   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:51.602899   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:51.602943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:51.639896   67149 cri.go:89] found id: ""
	I1028 18:31:51.639915   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.639922   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:51.639928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:51.639972   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:51.675728   67149 cri.go:89] found id: ""
	I1028 18:31:51.675755   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.675762   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:51.675768   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:51.675825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:51.710285   67149 cri.go:89] found id: ""
	I1028 18:31:51.710312   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.710320   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:51.710326   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:51.710374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:51.744527   67149 cri.go:89] found id: ""
	I1028 18:31:51.744551   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.744560   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:51.744570   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:51.744584   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.780580   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:51.780614   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:51.832979   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:51.833008   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:51.846389   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:51.846415   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:51.918177   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:51.918196   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:51.918210   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.493806   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:54.506468   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:54.506526   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:54.540500   67149 cri.go:89] found id: ""
	I1028 18:31:54.540527   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.540537   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:54.540544   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:54.540601   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:54.573399   67149 cri.go:89] found id: ""
	I1028 18:31:54.573428   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.573438   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:54.573448   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:54.573509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:54.606227   67149 cri.go:89] found id: ""
	I1028 18:31:54.606262   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.606272   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:54.606278   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:54.606338   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:54.641143   67149 cri.go:89] found id: ""
	I1028 18:31:54.641163   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.641172   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:54.641179   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:54.641238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:54.674269   67149 cri.go:89] found id: ""
	I1028 18:31:54.674292   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.674300   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:54.674306   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:54.674352   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:54.707160   67149 cri.go:89] found id: ""
	I1028 18:31:54.707183   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.707191   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:54.707197   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:54.707242   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:54.746522   67149 cri.go:89] found id: ""
	I1028 18:31:54.746544   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.746552   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:54.746558   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:54.746613   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:54.779315   67149 cri.go:89] found id: ""
	I1028 18:31:54.779341   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.779348   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:54.779356   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:54.779367   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:54.830987   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:54.831017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:54.844846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:54.844871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:54.913540   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:54.913558   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:54.913568   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.994220   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:54.994250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:54.112785   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.114029   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.401657   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.402726   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.371756   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:59.372308   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.532820   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:57.545394   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:57.545454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:57.582329   67149 cri.go:89] found id: ""
	I1028 18:31:57.582355   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.582365   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:57.582372   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:57.582438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:57.616082   67149 cri.go:89] found id: ""
	I1028 18:31:57.616107   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.616115   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:57.616123   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:57.616167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:57.650118   67149 cri.go:89] found id: ""
	I1028 18:31:57.650144   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.650153   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:57.650162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:57.650215   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:57.684801   67149 cri.go:89] found id: ""
	I1028 18:31:57.684823   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.684831   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:57.684839   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:57.684887   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:57.722396   67149 cri.go:89] found id: ""
	I1028 18:31:57.722423   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.722431   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:57.722437   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:57.722516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:57.759779   67149 cri.go:89] found id: ""
	I1028 18:31:57.759802   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.759809   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:57.759818   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:57.759861   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:57.793977   67149 cri.go:89] found id: ""
	I1028 18:31:57.794034   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.794045   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:57.794053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:57.794117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:57.831104   67149 cri.go:89] found id: ""
	I1028 18:31:57.831130   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.831140   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:57.831151   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:57.831164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:57.920155   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:57.920174   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:57.920184   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:57.999677   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:57.999709   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:58.036647   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:58.036673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:58.088299   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:58.088333   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.601832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:00.615434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:00.615491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:00.653344   67149 cri.go:89] found id: ""
	I1028 18:32:00.653372   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.653383   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:00.653390   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:00.653450   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:00.693086   67149 cri.go:89] found id: ""
	I1028 18:32:00.693111   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.693122   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:00.693130   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:00.693188   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:00.728129   67149 cri.go:89] found id: ""
	I1028 18:32:00.728157   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.728167   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:00.728181   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:00.728243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:00.760540   67149 cri.go:89] found id: ""
	I1028 18:32:00.760568   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.760579   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:00.760586   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:00.760654   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:00.796633   67149 cri.go:89] found id: ""
	I1028 18:32:00.796662   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.796672   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:00.796680   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:00.796740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:00.829924   67149 cri.go:89] found id: ""
	I1028 18:32:00.829954   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.829966   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:00.829974   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:00.830028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:00.861565   67149 cri.go:89] found id: ""
	I1028 18:32:00.861586   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.861593   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:00.861599   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:00.861655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:00.894129   67149 cri.go:89] found id: ""
	I1028 18:32:00.894154   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.894162   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:00.894169   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:00.894180   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.908303   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:00.908331   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:00.974521   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:00.974543   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:00.974557   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:58.612554   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.612655   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:58.901908   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.902851   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.872423   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.873235   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.048113   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:01.048140   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:01.086657   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:01.086731   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.639781   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:03.652239   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:03.652291   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:03.687098   67149 cri.go:89] found id: ""
	I1028 18:32:03.687120   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.687129   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:03.687135   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:03.687181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:03.722176   67149 cri.go:89] found id: ""
	I1028 18:32:03.722206   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.722217   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:03.722225   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:03.722282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:03.757489   67149 cri.go:89] found id: ""
	I1028 18:32:03.757512   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.757520   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:03.757526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:03.757571   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:03.795359   67149 cri.go:89] found id: ""
	I1028 18:32:03.795400   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.795411   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:03.795429   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:03.795489   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:03.830919   67149 cri.go:89] found id: ""
	I1028 18:32:03.830945   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.830953   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:03.830958   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:03.831008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:03.863396   67149 cri.go:89] found id: ""
	I1028 18:32:03.863425   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.863437   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:03.863445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:03.863516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:03.897085   67149 cri.go:89] found id: ""
	I1028 18:32:03.897112   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.897121   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:03.897128   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:03.897189   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:03.929439   67149 cri.go:89] found id: ""
	I1028 18:32:03.929467   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.929478   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:03.929487   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:03.929503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.982917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:03.982943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:03.996333   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:03.996355   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:04.062786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:04.062813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:04.062827   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:04.143988   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:04.144016   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:03.113499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.612544   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.620294   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.402246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.402730   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.904429   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.373120   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:08.871662   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.683977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:06.696605   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:06.696680   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:06.733031   67149 cri.go:89] found id: ""
	I1028 18:32:06.733060   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.733070   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:06.733078   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:06.733138   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:06.769196   67149 cri.go:89] found id: ""
	I1028 18:32:06.769218   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.769225   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:06.769231   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:06.769280   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:06.806938   67149 cri.go:89] found id: ""
	I1028 18:32:06.806959   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.806966   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:06.806972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:06.807017   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:06.839506   67149 cri.go:89] found id: ""
	I1028 18:32:06.839528   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.839537   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:06.839542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:06.839587   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:06.878275   67149 cri.go:89] found id: ""
	I1028 18:32:06.878300   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.878309   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:06.878317   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:06.878382   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:06.916336   67149 cri.go:89] found id: ""
	I1028 18:32:06.916366   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.916374   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:06.916381   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:06.916434   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:06.971413   67149 cri.go:89] found id: ""
	I1028 18:32:06.971435   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.971443   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:06.971449   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:06.971494   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:07.004432   67149 cri.go:89] found id: ""
	I1028 18:32:07.004464   67149 logs.go:282] 0 containers: []
	W1028 18:32:07.004485   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:07.004496   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:07.004509   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:07.081741   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:07.081780   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:07.122022   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:07.122053   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:07.169470   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:07.169496   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:07.183433   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:07.183459   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:07.251765   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:09.752773   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:09.766042   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:09.766119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:09.802881   67149 cri.go:89] found id: ""
	I1028 18:32:09.802911   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.802923   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:09.802930   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:09.802987   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:09.840269   67149 cri.go:89] found id: ""
	I1028 18:32:09.840292   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.840300   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:09.840305   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:09.840370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:09.874654   67149 cri.go:89] found id: ""
	I1028 18:32:09.874679   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.874689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:09.874696   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:09.874752   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:09.910328   67149 cri.go:89] found id: ""
	I1028 18:32:09.910350   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.910358   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:09.910365   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:09.910425   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:09.942717   67149 cri.go:89] found id: ""
	I1028 18:32:09.942744   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.942752   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:09.942757   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:09.942814   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:09.975644   67149 cri.go:89] found id: ""
	I1028 18:32:09.975674   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.975685   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:09.975692   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:09.975750   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:10.008257   67149 cri.go:89] found id: ""
	I1028 18:32:10.008294   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.008305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:10.008313   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:10.008373   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:10.041678   67149 cri.go:89] found id: ""
	I1028 18:32:10.041705   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.041716   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:10.041726   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:10.041739   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:10.090474   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:10.090503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:10.103846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:10.103874   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:10.172819   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:10.172847   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:10.172862   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:10.251927   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:10.251955   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:10.112553   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.113090   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:10.401890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.902888   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:11.371860   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:13.373112   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.795985   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:12.810859   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:12.810921   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:12.849897   67149 cri.go:89] found id: ""
	I1028 18:32:12.849925   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.849934   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:12.849940   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:12.850003   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:12.883007   67149 cri.go:89] found id: ""
	I1028 18:32:12.883034   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.883045   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:12.883052   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:12.883111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:12.917458   67149 cri.go:89] found id: ""
	I1028 18:32:12.917485   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.917496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:12.917503   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:12.917561   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:12.950531   67149 cri.go:89] found id: ""
	I1028 18:32:12.950558   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.950568   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:12.950576   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:12.950631   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:12.983902   67149 cri.go:89] found id: ""
	I1028 18:32:12.983929   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.983937   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:12.983943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:12.983986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:13.017486   67149 cri.go:89] found id: ""
	I1028 18:32:13.017513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.017521   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:13.017526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:13.017582   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:13.050553   67149 cri.go:89] found id: ""
	I1028 18:32:13.050582   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.050594   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:13.050601   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:13.050658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:13.083489   67149 cri.go:89] found id: ""
	I1028 18:32:13.083513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.083520   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:13.083528   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:13.083537   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:13.137451   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:13.137482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:13.153154   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:13.153179   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:13.221043   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:13.221066   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:13.221080   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:13.299930   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:13.299960   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:15.850484   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:15.862930   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:15.862982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:15.895625   67149 cri.go:89] found id: ""
	I1028 18:32:15.895643   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.895651   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:15.895657   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:15.895701   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:15.928073   67149 cri.go:89] found id: ""
	I1028 18:32:15.928103   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.928113   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:15.928120   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:15.928180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:15.962261   67149 cri.go:89] found id: ""
	I1028 18:32:15.962282   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.962290   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:15.962295   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:15.962342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:15.999177   67149 cri.go:89] found id: ""
	I1028 18:32:15.999206   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.999216   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:15.999224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:15.999282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:16.033098   67149 cri.go:89] found id: ""
	I1028 18:32:16.033126   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.033138   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:16.033145   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:16.033208   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:14.612739   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.112266   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.401576   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.401773   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:18.372059   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:16.067049   67149 cri.go:89] found id: ""
	I1028 18:32:16.067071   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.067083   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:16.067089   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:16.067145   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:16.106936   67149 cri.go:89] found id: ""
	I1028 18:32:16.106970   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.106981   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:16.106988   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:16.107044   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:16.141702   67149 cri.go:89] found id: ""
	I1028 18:32:16.141729   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.141741   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:16.141751   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:16.141762   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:16.178772   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:16.178803   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:16.230851   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:16.230878   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:16.244489   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:16.244514   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:16.319362   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:16.319389   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:16.319405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:18.899694   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:18.913287   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:18.913358   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:18.954136   67149 cri.go:89] found id: ""
	I1028 18:32:18.954158   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.954165   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:18.954170   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:18.954218   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:18.987427   67149 cri.go:89] found id: ""
	I1028 18:32:18.987449   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.987457   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:18.987462   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:18.987505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:19.022067   67149 cri.go:89] found id: ""
	I1028 18:32:19.022099   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.022110   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:19.022118   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:19.022167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:19.054533   67149 cri.go:89] found id: ""
	I1028 18:32:19.054560   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.054570   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:19.054578   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:19.054644   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:19.099324   67149 cri.go:89] found id: ""
	I1028 18:32:19.099356   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.099367   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:19.099375   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:19.099436   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:19.146437   67149 cri.go:89] found id: ""
	I1028 18:32:19.146463   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.146470   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:19.146478   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:19.146540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:19.192027   67149 cri.go:89] found id: ""
	I1028 18:32:19.192053   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.192070   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:19.192078   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:19.192140   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:19.228411   67149 cri.go:89] found id: ""
	I1028 18:32:19.228437   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.228447   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:19.228457   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:19.228480   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:19.313151   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:19.313183   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:19.352117   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:19.352142   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:19.402772   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:19.402805   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:19.416148   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:19.416167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:19.483098   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:19.112720   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.611924   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:19.403635   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.902116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:20.872280   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:22.872726   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.983420   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:21.997129   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:21.997180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:22.035600   67149 cri.go:89] found id: ""
	I1028 18:32:22.035622   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.035631   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:22.035637   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:22.035684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:22.073413   67149 cri.go:89] found id: ""
	I1028 18:32:22.073440   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.073450   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:22.073458   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:22.073505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:22.108637   67149 cri.go:89] found id: ""
	I1028 18:32:22.108663   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.108673   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:22.108682   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:22.108740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:22.145837   67149 cri.go:89] found id: ""
	I1028 18:32:22.145860   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.145867   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:22.145873   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:22.145928   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:22.183830   67149 cri.go:89] found id: ""
	I1028 18:32:22.183855   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.183864   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:22.183869   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:22.183917   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:22.221402   67149 cri.go:89] found id: ""
	I1028 18:32:22.221423   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.221430   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:22.221436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:22.221484   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:22.262193   67149 cri.go:89] found id: ""
	I1028 18:32:22.262220   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.262229   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:22.262234   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:22.262297   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:22.298774   67149 cri.go:89] found id: ""
	I1028 18:32:22.298797   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.298808   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:22.298819   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:22.298831   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:22.348677   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:22.348716   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:22.362199   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:22.362220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:22.429304   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:22.429327   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:22.429345   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:22.511591   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:22.511623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.049119   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:25.063910   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:25.063970   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:25.099795   67149 cri.go:89] found id: ""
	I1028 18:32:25.099822   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.099833   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:25.099840   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:25.099898   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:25.137957   67149 cri.go:89] found id: ""
	I1028 18:32:25.137985   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.137995   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:25.138002   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:25.138063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:25.174687   67149 cri.go:89] found id: ""
	I1028 18:32:25.174715   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.174726   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:25.174733   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:25.174795   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:25.207039   67149 cri.go:89] found id: ""
	I1028 18:32:25.207067   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.207077   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:25.207084   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:25.207130   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:25.239961   67149 cri.go:89] found id: ""
	I1028 18:32:25.239990   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.239998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:25.240004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:25.240055   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:25.273823   67149 cri.go:89] found id: ""
	I1028 18:32:25.273848   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.273858   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:25.273865   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:25.273925   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:25.310725   67149 cri.go:89] found id: ""
	I1028 18:32:25.310754   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.310765   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:25.310772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:25.310830   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:25.348724   67149 cri.go:89] found id: ""
	I1028 18:32:25.348749   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.348760   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:25.348770   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:25.348784   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:25.430213   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:25.430243   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.472233   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:25.472263   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:25.525648   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:25.525676   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:25.538697   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:25.538721   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:25.606779   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
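Every describe-nodes attempt fails the same way: the connection to localhost:8443 is refused, i.e. no apiserver is listening on the secure port. A small sketch of how one might confirm that from the node; these are generic checks and assumptions, not commands taken from this log:

  # Is anything listening on the apiserver port?
  sudo ss -ltnp | grep 8443

  # Probe the health endpoint (expected to fail while the control plane is down).
  curl -sk https://localhost:8443/healthz; echo

  # Is the kubelet service active at all?
  sudo systemctl is-active kubelet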
	I1028 18:32:23.612901   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.112494   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:23.902733   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.402271   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:25.372428   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:27.870461   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:29.871824   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.107877   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:28.122241   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:28.122296   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:28.157042   67149 cri.go:89] found id: ""
	I1028 18:32:28.157070   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.157082   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:28.157089   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:28.157142   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:28.190625   67149 cri.go:89] found id: ""
	I1028 18:32:28.190648   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.190658   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:28.190666   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:28.190724   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:28.224528   67149 cri.go:89] found id: ""
	I1028 18:32:28.224551   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.224559   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:28.224565   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:28.224609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:28.265073   67149 cri.go:89] found id: ""
	I1028 18:32:28.265100   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.265110   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:28.265116   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:28.265174   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:28.302598   67149 cri.go:89] found id: ""
	I1028 18:32:28.302623   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.302633   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:28.302640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:28.302697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:28.339757   67149 cri.go:89] found id: ""
	I1028 18:32:28.339781   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.339789   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:28.339794   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:28.339846   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:28.375185   67149 cri.go:89] found id: ""
	I1028 18:32:28.375213   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.375224   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:28.375231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:28.375294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:28.413292   67149 cri.go:89] found id: ""
	I1028 18:32:28.413316   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.413334   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:28.413344   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:28.413376   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:28.464069   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:28.464098   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:28.478275   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:28.478299   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:28.546483   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.546504   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:28.546515   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:28.623015   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:28.623041   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:28.613303   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.111518   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.403789   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:30.903113   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:32.371951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:34.372820   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.161570   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:31.175056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:31.175119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:31.210163   67149 cri.go:89] found id: ""
	I1028 18:32:31.210187   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.210199   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:31.210207   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:31.210264   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:31.244605   67149 cri.go:89] found id: ""
	I1028 18:32:31.244630   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.244637   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:31.244643   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:31.244688   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:31.280793   67149 cri.go:89] found id: ""
	I1028 18:32:31.280818   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.280827   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:31.280833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:31.280890   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:31.314616   67149 cri.go:89] found id: ""
	I1028 18:32:31.314641   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.314649   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:31.314654   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:31.314709   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:31.349386   67149 cri.go:89] found id: ""
	I1028 18:32:31.349410   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.349417   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:31.349423   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:31.349469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:31.382831   67149 cri.go:89] found id: ""
	I1028 18:32:31.382861   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.382871   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:31.382879   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:31.382924   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:31.417365   67149 cri.go:89] found id: ""
	I1028 18:32:31.417391   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.417400   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:31.417410   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:31.417469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:31.450631   67149 cri.go:89] found id: ""
	I1028 18:32:31.450660   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.450672   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:31.450683   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:31.450697   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.488932   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:31.488959   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:31.539335   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:31.539361   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:31.552304   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:31.552328   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:31.629291   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:31.629308   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:31.629323   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.207517   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:34.221231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:34.221310   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:34.255342   67149 cri.go:89] found id: ""
	I1028 18:32:34.255365   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.255373   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:34.255379   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:34.255438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:34.303802   67149 cri.go:89] found id: ""
	I1028 18:32:34.303827   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.303836   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:34.303843   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:34.303896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:34.339531   67149 cri.go:89] found id: ""
	I1028 18:32:34.339568   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.339579   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:34.339589   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:34.339653   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:34.374063   67149 cri.go:89] found id: ""
	I1028 18:32:34.374084   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.374094   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:34.374102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:34.374155   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:34.410880   67149 cri.go:89] found id: ""
	I1028 18:32:34.410909   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.410918   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:34.410924   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:34.410971   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:34.445372   67149 cri.go:89] found id: ""
	I1028 18:32:34.445397   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.445408   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:34.445416   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:34.445474   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:34.477820   67149 cri.go:89] found id: ""
	I1028 18:32:34.477844   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.477851   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:34.477857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:34.477909   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:34.517581   67149 cri.go:89] found id: ""
	I1028 18:32:34.517602   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.517609   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:34.517618   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:34.517632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:34.530407   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:34.530430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:34.599055   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:34.599083   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:34.599096   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.681579   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:34.681612   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:34.720523   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:34.720550   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:33.111858   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.112216   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.613521   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:33.401782   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.402544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.901848   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:36.871451   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.372642   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.272697   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:37.289091   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:37.289159   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:37.321600   67149 cri.go:89] found id: ""
	I1028 18:32:37.321628   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.321639   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:37.321647   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:37.321704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:37.353296   67149 cri.go:89] found id: ""
	I1028 18:32:37.353324   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.353337   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:37.353343   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:37.353400   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:37.386299   67149 cri.go:89] found id: ""
	I1028 18:32:37.386321   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.386328   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:37.386333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:37.386401   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:37.420992   67149 cri.go:89] found id: ""
	I1028 18:32:37.421026   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.421039   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:37.421047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:37.421117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:37.456174   67149 cri.go:89] found id: ""
	I1028 18:32:37.456206   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.456217   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:37.456224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:37.456284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:37.491796   67149 cri.go:89] found id: ""
	I1028 18:32:37.491819   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.491827   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:37.491833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:37.491878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:37.529002   67149 cri.go:89] found id: ""
	I1028 18:32:37.529028   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.529039   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:37.529047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:37.529111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:37.568967   67149 cri.go:89] found id: ""
	I1028 18:32:37.568993   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.569001   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:37.569010   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:37.569022   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:37.640041   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:37.640065   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:37.640076   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:37.725490   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:37.725524   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:37.771858   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:37.771879   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.821240   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:37.821271   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.334946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:40.349147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:40.349216   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:40.383931   67149 cri.go:89] found id: ""
	I1028 18:32:40.383956   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.383966   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:40.383973   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:40.384028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:40.419877   67149 cri.go:89] found id: ""
	I1028 18:32:40.419905   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.419915   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:40.419922   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:40.419978   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:40.453659   67149 cri.go:89] found id: ""
	I1028 18:32:40.453681   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.453689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:40.453695   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:40.453744   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:40.486299   67149 cri.go:89] found id: ""
	I1028 18:32:40.486326   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.486343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:40.486350   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:40.486407   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:40.518309   67149 cri.go:89] found id: ""
	I1028 18:32:40.518334   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.518344   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:40.518351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:40.518402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:40.549008   67149 cri.go:89] found id: ""
	I1028 18:32:40.549040   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.549049   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:40.549055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:40.549108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:40.586157   67149 cri.go:89] found id: ""
	I1028 18:32:40.586177   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.586184   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:40.586189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:40.586232   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:40.621107   67149 cri.go:89] found id: ""
	I1028 18:32:40.621133   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.621144   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:40.621153   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:40.621164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.633793   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:40.633816   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:40.700370   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:40.700393   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:40.700405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:40.780964   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:40.780993   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:40.819904   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:40.819928   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:40.112755   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:42.113116   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.903476   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.904639   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.872360   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.371399   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:43.371487   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:43.384387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:43.384445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:43.419889   67149 cri.go:89] found id: ""
	I1028 18:32:43.419922   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.419931   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:43.419937   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:43.419997   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:43.455177   67149 cri.go:89] found id: ""
	I1028 18:32:43.455209   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.455219   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:43.455227   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:43.455295   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:43.493070   67149 cri.go:89] found id: ""
	I1028 18:32:43.493094   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.493104   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:43.493111   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:43.493170   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:43.526164   67149 cri.go:89] found id: ""
	I1028 18:32:43.526191   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.526199   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:43.526205   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:43.526254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:43.559225   67149 cri.go:89] found id: ""
	I1028 18:32:43.559252   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.559263   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:43.559270   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:43.559323   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:43.597178   67149 cri.go:89] found id: ""
	I1028 18:32:43.597198   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.597206   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:43.597212   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:43.597276   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:43.633179   67149 cri.go:89] found id: ""
	I1028 18:32:43.633200   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.633209   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:43.633214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:43.633290   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:43.669567   67149 cri.go:89] found id: ""
	I1028 18:32:43.669596   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.669605   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:43.669615   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:43.669631   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:43.737618   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:43.737638   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:43.737650   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:43.821394   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:43.821425   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:43.859924   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:43.859950   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:43.913539   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:43.913566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:44.611539   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.613781   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.401399   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.401930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.371445   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.372075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.429021   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:46.443137   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:46.443197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:46.480363   67149 cri.go:89] found id: ""
	I1028 18:32:46.480385   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.480394   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:46.480400   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:46.480452   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:46.514702   67149 cri.go:89] found id: ""
	I1028 18:32:46.514731   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.514738   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:46.514744   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:46.514796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:46.546829   67149 cri.go:89] found id: ""
	I1028 18:32:46.546857   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.546868   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:46.546874   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:46.546920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:46.580372   67149 cri.go:89] found id: ""
	I1028 18:32:46.580398   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.580407   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:46.580415   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:46.580491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:46.615455   67149 cri.go:89] found id: ""
	I1028 18:32:46.615479   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.615489   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:46.615497   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:46.615556   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:46.649547   67149 cri.go:89] found id: ""
	I1028 18:32:46.649570   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.649577   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:46.649583   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:46.649641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:46.684744   67149 cri.go:89] found id: ""
	I1028 18:32:46.684768   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.684779   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:46.684787   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:46.684852   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:46.725530   67149 cri.go:89] found id: ""
	I1028 18:32:46.725558   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.725569   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:46.725578   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:46.725592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:46.794487   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:46.794506   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:46.794517   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:46.881407   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:46.881438   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:46.921649   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:46.921671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:46.972915   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:46.972947   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.486835   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:49.501445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:49.501509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:49.537356   67149 cri.go:89] found id: ""
	I1028 18:32:49.537377   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.537384   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:49.537389   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:49.537443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:49.568514   67149 cri.go:89] found id: ""
	I1028 18:32:49.568541   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.568549   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:49.568555   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:49.568610   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:49.602300   67149 cri.go:89] found id: ""
	I1028 18:32:49.602324   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.602333   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:49.602342   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:49.602390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:49.640326   67149 cri.go:89] found id: ""
	I1028 18:32:49.640356   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.640366   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:49.640376   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:49.640437   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:49.675145   67149 cri.go:89] found id: ""
	I1028 18:32:49.675175   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.675183   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:49.675189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:49.675235   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:49.711104   67149 cri.go:89] found id: ""
	I1028 18:32:49.711129   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.711139   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:49.711147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:49.711206   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:49.748316   67149 cri.go:89] found id: ""
	I1028 18:32:49.748366   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.748378   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:49.748385   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:49.748441   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:49.781620   67149 cri.go:89] found id: ""
	I1028 18:32:49.781646   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.781656   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:49.781665   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:49.781679   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.795119   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:49.795143   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:49.870438   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:49.870519   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:49.870539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:49.956845   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:49.956875   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:49.993067   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:49.993097   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:49.112102   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:51.612691   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.901950   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.902354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.903627   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.871412   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.871499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:54.874588   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.543260   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:52.556524   67149 kubeadm.go:597] duration metric: took 4m2.404527005s to restartPrimaryControlPlane
	W1028 18:32:52.556602   67149 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:52.556639   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:32:53.011065   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:32:53.026226   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:32:53.035868   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:32:53.045257   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:32:53.045271   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:32:53.045302   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:32:53.054383   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:32:53.054430   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:32:53.063665   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:32:53.073006   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:32:53.073054   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:32:53.083156   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.092700   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:32:53.092742   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.102374   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:32:53.112072   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:32:53.112121   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:32:53.122102   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:32:53.347625   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:32:53.613118   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:56.111841   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:55.402354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.902406   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.371909   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:59.872630   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.112962   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:00.613499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.896006   66801 pod_ready.go:82] duration metric: took 4m0.00005957s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	E1028 18:32:58.896033   66801 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:32:58.896052   66801 pod_ready.go:39] duration metric: took 4m13.055181811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:32:58.896092   66801 kubeadm.go:597] duration metric: took 4m21.540757653s to restartPrimaryControlPlane
	W1028 18:32:58.896147   66801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:58.896173   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:02.372443   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:04.871981   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:03.113038   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:05.114488   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:07.612365   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:06.872705   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.371018   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.612856   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:12.114228   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:11.371831   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:13.372636   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:14.613213   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.113328   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:15.871907   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.872203   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:19.612892   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:21.613052   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:20.370964   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:22.371880   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:24.372718   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:25.039296   66801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.14309835s)
	I1028 18:33:25.039378   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:25.056172   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:25.066775   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:25.077717   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:25.077734   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:25.077770   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:33:25.086924   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:25.086968   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:25.096867   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:33:25.106162   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:25.106205   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:25.117015   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.126191   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:25.126245   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.135691   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:33:25.144827   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:25.144867   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:25.153834   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:25.201789   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:33:25.201866   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:33:25.306568   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:33:25.306717   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:33:25.306845   66801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:33:25.314339   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:33:25.316173   66801 out.go:235]   - Generating certificates and keys ...
	I1028 18:33:25.316271   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:33:25.316345   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:33:25.316463   66801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:33:25.316571   66801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:33:25.316688   66801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:33:25.316768   66801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:33:25.316857   66801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:33:25.316943   66801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:33:25.317047   66801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:33:25.317149   66801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:33:25.317209   66801 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:33:25.317299   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:33:25.643056   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:33:25.723345   66801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:33:25.831628   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:33:25.908255   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:33:26.215149   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:33:26.215654   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:33:26.218291   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:33:24.111834   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.113295   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.220065   66801 out.go:235]   - Booting up control plane ...
	I1028 18:33:26.220170   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:33:26.220251   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:33:26.220336   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:33:26.239633   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:33:26.245543   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:33:26.245612   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:33:26.378154   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:33:26.378332   66801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:33:26.879957   66801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.937575ms
	I1028 18:33:26.880090   66801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:33:26.365771   67489 pod_ready.go:82] duration metric: took 4m0.000286415s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:26.365796   67489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:26.365812   67489 pod_ready.go:39] duration metric: took 4m12.539631154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:26.365837   67489 kubeadm.go:597] duration metric: took 4m19.835720994s to restartPrimaryControlPlane
	W1028 18:33:26.365884   67489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:26.365910   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:31.882091   66801 kubeadm.go:310] [api-check] The API server is healthy after 5.002114527s
	I1028 18:33:31.897915   66801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:33:31.914311   66801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:33:31.943604   66801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:33:31.943859   66801 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-051152 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:33:31.954350   66801 kubeadm.go:310] [bootstrap-token] Using token: h7eyzq.87sgylc03ke6zhfy
	I1028 18:33:28.613480   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.113034   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.955444   66801 out.go:235]   - Configuring RBAC rules ...
	I1028 18:33:31.955591   66801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:33:31.960749   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:33:31.967695   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:33:31.970863   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:33:31.973924   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:33:31.979191   66801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:33:32.291512   66801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:33:32.714999   66801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:33:33.291889   66801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:33:33.293069   66801 kubeadm.go:310] 
	I1028 18:33:33.293167   66801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:33:33.293182   66801 kubeadm.go:310] 
	I1028 18:33:33.293255   66801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:33:33.293268   66801 kubeadm.go:310] 
	I1028 18:33:33.293307   66801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:33:33.293372   66801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:33:33.293435   66801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:33:33.293447   66801 kubeadm.go:310] 
	I1028 18:33:33.293518   66801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:33:33.293526   66801 kubeadm.go:310] 
	I1028 18:33:33.293595   66801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:33:33.293624   66801 kubeadm.go:310] 
	I1028 18:33:33.293712   66801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:33:33.293842   66801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:33:33.293946   66801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:33:33.293960   66801 kubeadm.go:310] 
	I1028 18:33:33.294117   66801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:33:33.294196   66801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:33:33.294203   66801 kubeadm.go:310] 
	I1028 18:33:33.294276   66801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294385   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:33:33.294414   66801 kubeadm.go:310] 	--control-plane 
	I1028 18:33:33.294427   66801 kubeadm.go:310] 
	I1028 18:33:33.294515   66801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:33:33.294525   66801 kubeadm.go:310] 
	I1028 18:33:33.294629   66801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294774   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:33:33.295715   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:33:33.295839   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:33:33.295852   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:33:33.297447   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:33:33.298607   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:33:33.311113   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:33:33.329576   66801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:33:33.329634   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:33.329680   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-051152 minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=no-preload-051152 minikube.k8s.io/primary=true
	I1028 18:33:33.355186   66801 ops.go:34] apiserver oom_adj: -16
	I1028 18:33:33.509281   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.009672   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.509515   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.010084   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.509359   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.009689   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.509671   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.009884   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.510004   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.615853   66801 kubeadm.go:1113] duration metric: took 4.286272328s to wait for elevateKubeSystemPrivileges
	I1028 18:33:37.615890   66801 kubeadm.go:394] duration metric: took 5m0.313982235s to StartCluster
	I1028 18:33:37.615913   66801 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.616000   66801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:33:37.618418   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.618741   66801 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:33:37.618857   66801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:33:37.618951   66801 addons.go:69] Setting storage-provisioner=true in profile "no-preload-051152"
	I1028 18:33:37.618963   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:33:37.618975   66801 addons.go:69] Setting default-storageclass=true in profile "no-preload-051152"
	I1028 18:33:37.619001   66801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-051152"
	I1028 18:33:37.618973   66801 addons.go:234] Setting addon storage-provisioner=true in "no-preload-051152"
	W1028 18:33:37.619019   66801 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:33:37.619012   66801 addons.go:69] Setting metrics-server=true in profile "no-preload-051152"
	I1028 18:33:37.619043   66801 addons.go:234] Setting addon metrics-server=true in "no-preload-051152"
	I1028 18:33:37.619047   66801 host.go:66] Checking if "no-preload-051152" exists ...
	W1028 18:33:37.619056   66801 addons.go:243] addon metrics-server should already be in state true
	I1028 18:33:37.619097   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.619417   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619446   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619472   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619488   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619487   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619521   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.620738   66801 out.go:177] * Verifying Kubernetes components...
	I1028 18:33:37.622165   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:33:37.636006   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1028 18:33:37.636285   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I1028 18:33:37.636536   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.636621   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.637055   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637082   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637344   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637368   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637419   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637634   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637811   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.638112   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.638157   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.638738   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1028 18:33:37.639176   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.639609   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.639632   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.639918   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.640333   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.640375   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.641571   66801 addons.go:234] Setting addon default-storageclass=true in "no-preload-051152"
	W1028 18:33:37.641592   66801 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:33:37.641620   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.641947   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.641981   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.657758   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I1028 18:33:37.657834   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I1028 18:33:37.657942   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I1028 18:33:37.658187   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658335   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658739   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658752   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658877   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658896   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658931   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.659309   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659358   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659409   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.659428   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.659552   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.659934   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.659964   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.660163   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.660406   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.661568   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.662429   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.663435   66801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:33:37.664414   66801 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:33:33.613699   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:36.111831   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:37.665306   66801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.665324   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:33:37.665343   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.666055   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:33:37.666073   66801 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:33:37.666092   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.668918   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669385   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669519   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.669543   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669754   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.669942   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.670093   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.670266   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.670513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.670556   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.670719   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.670851   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.671014   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.671115   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.677419   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I1028 18:33:37.677828   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.678184   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.678201   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.678476   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.678686   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.680177   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.680403   66801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.680420   66801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:33:37.680437   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.683981   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.684534   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.685007   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.685153   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.685307   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.832104   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:33:37.859406   66801 node_ready.go:35] waiting up to 6m0s for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873437   66801 node_ready.go:49] node "no-preload-051152" has status "Ready":"True"
	I1028 18:33:37.873460   66801 node_ready.go:38] duration metric: took 14.023686ms for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873470   66801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:37.888286   66801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:37.917341   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:33:37.917363   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:33:37.948690   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:33:37.948716   66801 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:33:37.967948   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.971737   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.998758   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:37.998782   66801 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:33:38.034907   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:38.924695   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924720   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.924762   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924828   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925048   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925079   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925093   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925105   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925128   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925131   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925142   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925153   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925154   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925164   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925372   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925397   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925382   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926852   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926857   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.926872   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.955462   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.955492   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.955858   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.955938   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.955953   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373144   66801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.338192413s)
	I1028 18:33:39.373209   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373224   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373512   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373529   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373537   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373544   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373761   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373775   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373785   66801 addons.go:475] Verifying addon metrics-server=true in "no-preload-051152"
	I1028 18:33:39.375584   66801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:33:38.113078   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:40.612141   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.612763   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:39.377031   66801 addons.go:510] duration metric: took 1.758176418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:33:39.906691   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.396083   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:44.894264   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:46.396937   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.397023   66801 pod_ready.go:82] duration metric: took 8.508709164s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.397048   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402560   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.402579   66801 pod_ready.go:82] duration metric: took 5.5155ms for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402588   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406630   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.406646   66801 pod_ready.go:82] duration metric: took 4.052513ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406654   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411238   66801 pod_ready.go:93] pod "kube-proxy-28qht" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.411253   66801 pod_ready.go:82] duration metric: took 4.592983ms for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411260   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414867   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.414880   66801 pod_ready.go:82] duration metric: took 3.615132ms for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414886   66801 pod_ready.go:39] duration metric: took 8.541406133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:46.414900   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:33:46.414943   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:33:46.430889   66801 api_server.go:72] duration metric: took 8.81211088s to wait for apiserver process to appear ...
	I1028 18:33:46.430907   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:33:46.430925   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:33:46.435248   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:33:46.435963   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:33:46.435978   66801 api_server.go:131] duration metric: took 5.065719ms to wait for apiserver health ...
	I1028 18:33:46.435984   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:33:46.596186   66801 system_pods.go:59] 9 kube-system pods found
	I1028 18:33:46.596222   66801 system_pods.go:61] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.596230   66801 system_pods.go:61] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.596234   66801 system_pods.go:61] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.596238   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.596242   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.596246   66801 system_pods.go:61] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.596252   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.596301   66801 system_pods.go:61] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.596317   66801 system_pods.go:61] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.596324   66801 system_pods.go:74] duration metric: took 160.335823ms to wait for pod list to return data ...
	I1028 18:33:46.596341   66801 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:33:46.793115   66801 default_sa.go:45] found service account: "default"
	I1028 18:33:46.793147   66801 default_sa.go:55] duration metric: took 196.795286ms for default service account to be created ...
	I1028 18:33:46.793157   66801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:33:46.995868   66801 system_pods.go:86] 9 kube-system pods found
	I1028 18:33:46.995899   66801 system_pods.go:89] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.995905   66801 system_pods.go:89] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.995909   66801 system_pods.go:89] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.995912   66801 system_pods.go:89] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.995917   66801 system_pods.go:89] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.995920   66801 system_pods.go:89] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.995924   66801 system_pods.go:89] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.995929   66801 system_pods.go:89] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.995934   66801 system_pods.go:89] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.995941   66801 system_pods.go:126] duration metric: took 202.778451ms to wait for k8s-apps to be running ...
	I1028 18:33:46.995946   66801 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:33:46.995990   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:47.011260   66801 system_svc.go:56] duration metric: took 15.302599ms WaitForService to wait for kubelet
	I1028 18:33:47.011285   66801 kubeadm.go:582] duration metric: took 9.392510785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:33:47.011303   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:33:47.193217   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:33:47.193239   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:33:47.193250   66801 node_conditions.go:105] duration metric: took 181.942948ms to run NodePressure ...
	I1028 18:33:47.193261   66801 start.go:241] waiting for startup goroutines ...
	I1028 18:33:47.193267   66801 start.go:246] waiting for cluster config update ...
	I1028 18:33:47.193278   66801 start.go:255] writing updated cluster config ...
	I1028 18:33:47.193529   66801 ssh_runner.go:195] Run: rm -f paused
	I1028 18:33:47.240247   66801 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:33:47.242139   66801 out.go:177] * Done! kubectl is now configured to use "no-preload-051152" cluster and "default" namespace by default
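[Note] The readiness loop above (kube-system pods, default service account, kubelet service, NodePressure) can be reproduced by hand against the finished "no-preload-051152" profile. The commands below are an illustrative sketch only, not output from this run:

    # manual equivalents of the checks minikube just performed (illustrative)
    kubectl --context no-preload-051152 get pods -n kube-system              # all system pods Running?
    kubectl --context no-preload-051152 get sa default                       # default service account created?
    minikube -p no-preload-051152 ssh -- sudo systemctl is-active kubelet    # kubelet service active?
    kubectl --context no-preload-051152 describe node | grep -A5 Capacity    # ephemeral storage / cpu capacity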
	I1028 18:33:45.112037   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:47.112764   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:48.107354   66600 pod_ready.go:82] duration metric: took 4m0.001062902s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:48.107377   66600 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:48.107395   66600 pod_ready.go:39] duration metric: took 4m13.535788316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:48.107420   66600 kubeadm.go:597] duration metric: took 4m22.316644235s to restartPrimaryControlPlane
	W1028 18:33:48.107467   66600 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:48.107490   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:52.667497   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.301566887s)
	I1028 18:33:52.667559   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:52.683580   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:52.695334   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:52.705505   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:52.705524   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:52.705569   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:33:52.714922   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:52.714969   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:52.724156   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:33:52.733125   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:52.733161   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:52.742369   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.751021   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:52.751065   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.760543   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:33:52.770939   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:52.770985   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
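[Note] The block above is minikube's stale-config check: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files simply do not exist after the reset, so every grep exits with status 2 and every file is rm'd). A minimal shell sketch of that pattern, assuming the same endpoint and file set as in this run:

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"   # stale or missing: drop it before 'kubeadm init'
        fi
    done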
	I1028 18:33:52.781890   67489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:52.961562   67489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:01.798408   67489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:01.798470   67489 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:01.798580   67489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:01.798724   67489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:01.798811   67489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:01.798882   67489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:01.800228   67489 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:01.800320   67489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:01.800392   67489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:01.800486   67489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:01.800580   67489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:01.800641   67489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:01.800694   67489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:01.800764   67489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:01.800842   67489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:01.800955   67489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:01.801019   67489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:01.801053   67489 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:01.801102   67489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:01.801145   67489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:01.801196   67489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:01.801252   67489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:01.801316   67489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:01.801409   67489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:01.801513   67489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:01.801605   67489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:01.802967   67489 out.go:235]   - Booting up control plane ...
	I1028 18:34:01.803061   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:01.803169   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:01.803254   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:01.803376   67489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:01.803488   67489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:01.803558   67489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:01.803685   67489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:01.803800   67489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:01.803869   67489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.148945ms
	I1028 18:34:01.803933   67489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:01.803986   67489 kubeadm.go:310] [api-check] The API server is healthy after 5.003798359s
	I1028 18:34:01.804081   67489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:01.804187   67489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:01.804240   67489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:01.804438   67489 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-692033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:01.804533   67489 kubeadm.go:310] [bootstrap-token] Using token: wy8zqj.38m6tcr6hp7sgzod
	I1028 18:34:01.805760   67489 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:01.805856   67489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:01.805949   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:01.806108   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:01.806233   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:01.806378   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:01.806464   67489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:01.806579   67489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:01.806633   67489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:01.806673   67489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:01.806679   67489 kubeadm.go:310] 
	I1028 18:34:01.806735   67489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:01.806746   67489 kubeadm.go:310] 
	I1028 18:34:01.806836   67489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:01.806844   67489 kubeadm.go:310] 
	I1028 18:34:01.806880   67489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:01.806957   67489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:01.807001   67489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:01.807007   67489 kubeadm.go:310] 
	I1028 18:34:01.807060   67489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:01.807071   67489 kubeadm.go:310] 
	I1028 18:34:01.807112   67489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:01.807118   67489 kubeadm.go:310] 
	I1028 18:34:01.807171   67489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:01.807246   67489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:01.807307   67489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:01.807313   67489 kubeadm.go:310] 
	I1028 18:34:01.807387   67489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:01.807454   67489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:01.807465   67489 kubeadm.go:310] 
	I1028 18:34:01.807538   67489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807634   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:01.807655   67489 kubeadm.go:310] 	--control-plane 
	I1028 18:34:01.807661   67489 kubeadm.go:310] 
	I1028 18:34:01.807730   67489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:01.807739   67489 kubeadm.go:310] 
	I1028 18:34:01.807810   67489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807913   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:01.807923   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:34:01.807929   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:01.809168   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:01.810293   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:01.822030   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
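[Note] The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not shown in the log. The heredoc below is a hypothetical, minimal bridge CNI config of the kind minikube generates for the kvm2 + crio combination; all field values are assumptions, not taken from this run:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF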
	I1028 18:34:01.842831   67489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:01.842908   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:01.842963   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-692033 minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=default-k8s-diff-port-692033 minikube.k8s.io/primary=true
	I1028 18:34:01.875265   67489 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:02.050422   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:02.550824   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.050477   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.551245   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.051177   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.550572   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.051071   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.550926   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.638447   67489 kubeadm.go:1113] duration metric: took 3.795598924s to wait for elevateKubeSystemPrivileges
	I1028 18:34:05.638483   67489 kubeadm.go:394] duration metric: took 4m59.162037455s to StartCluster
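[Note] elevateKubeSystemPrivileges above creates the minikube-rbac clusterrolebinding and then polls `kubectl get sa default` until the service account controller has created the default account. A rough shell equivalent of that step, sketched with the same binary and kubeconfig paths that appear in the log:

    KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
    KCFG=--kubeconfig=/var/lib/minikube/kubeconfig
    sudo $KUBECTL create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default $KCFG
    until sudo $KUBECTL get sa default $KCFG >/dev/null 2>&1; do
        sleep 0.5   # 'default' service account not created yet; retry
    done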
	I1028 18:34:05.638504   67489 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.638591   67489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:05.641196   67489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.641497   67489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:05.641626   67489 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:05.641720   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:05.641730   67489 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641748   67489 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641760   67489 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:05.641776   67489 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641781   67489 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641792   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.641794   67489 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641803   67489 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:05.641804   67489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-692033"
	I1028 18:34:05.641832   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.642210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642217   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642229   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642245   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642255   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642314   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642905   67489 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:05.644361   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:05.658478   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I1028 18:34:05.658586   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1028 18:34:05.659040   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659044   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659524   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659546   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659701   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659724   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659879   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660044   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660111   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.660610   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.660648   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.661748   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1028 18:34:05.662150   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.662607   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.662627   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.662983   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.662991   67489 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.663006   67489 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:05.663029   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.663294   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663334   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.663531   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663572   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.675955   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1028 18:34:05.676345   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.676784   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.676802   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.677154   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.677358   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.678723   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I1028 18:34:05.678897   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1028 18:34:05.679025   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.679243   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679337   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679700   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679715   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.679805   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679823   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.680500   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680506   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680706   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.680834   67489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:05.681042   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.681070   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.681982   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:05.682005   67489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:05.682035   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.682363   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.683806   67489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:05.684992   67489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.685011   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:05.685029   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.686903   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.686957   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.686973   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.687218   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.687429   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.687693   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.687850   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.688516   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.688908   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.688933   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.689193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.689372   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.689513   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.689655   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.696743   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I1028 18:34:05.697029   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.697432   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.697458   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.697697   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.697843   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.699192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.699397   67489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.699405   67489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:05.699416   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.702897   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.703368   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703483   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.703667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.703841   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.703996   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.838049   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:05.857829   67489 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866141   67489 node_ready.go:49] node "default-k8s-diff-port-692033" has status "Ready":"True"
	I1028 18:34:05.866158   67489 node_ready.go:38] duration metric: took 8.296617ms for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866167   67489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:05.873027   67489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:05.927585   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:05.927608   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:05.928743   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.946390   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.961712   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:05.961734   67489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:05.993688   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:05.993711   67489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:06.097871   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
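[Note] The metrics-server manifests staged under /etc/kubernetes/addons/ are applied with the cluster's own kubectl binary and the in-VM kubeconfig, as shown above. Once the apply completes, rollout can be checked from the host; the command below is illustrative, not from this run:

    kubectl --context default-k8s-diff-port-692033 -n kube-system \
        rollout status deploy/metrics-server --timeout=2m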
	I1028 18:34:06.696189   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696226   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696195   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696300   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696696   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696713   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696697   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696721   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696735   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696742   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696750   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696722   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696794   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696984   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697000   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.697027   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697036   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.720324   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.720346   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.720649   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.720668   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262166   67489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.164245646s)
	I1028 18:34:07.262256   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262277   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262587   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262608   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262607   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262616   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262625   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262890   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262923   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262936   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262948   67489 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-692033"
	I1028 18:34:07.264414   67489 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:07.265449   67489 addons.go:510] duration metric: took 1.623834435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:07.882264   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.313629   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.206119005s)
	I1028 18:34:14.313702   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:14.329212   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:34:14.339407   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:14.349645   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:14.349669   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:14.349716   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:14.359332   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:14.359384   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:14.369627   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:14.381040   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:14.381098   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:14.390359   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.399743   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:14.399783   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.408932   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:14.417840   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:14.417876   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:14.427234   66600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:14.472502   66600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:14.472593   66600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:14.578311   66600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:14.578456   66600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:14.578576   66600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:14.586748   66600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:10.380304   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:12.878632   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.878951   67489 pod_ready.go:93] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:14.878974   67489 pod_ready.go:82] duration metric: took 9.005915421s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:14.878983   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385215   67489 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.385239   67489 pod_ready.go:82] duration metric: took 506.249352ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385250   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390412   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.390435   67489 pod_ready.go:82] duration metric: took 5.177559ms for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390448   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395252   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.395272   67489 pod_ready.go:82] duration metric: took 4.816812ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395281   67489 pod_ready.go:39] duration metric: took 9.52910413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:15.395298   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:15.395349   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:15.413693   67489 api_server.go:72] duration metric: took 9.772160727s to wait for apiserver process to appear ...
	I1028 18:34:15.413715   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:15.413734   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:34:15.417780   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:34:15.418688   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:15.418712   67489 api_server.go:131] duration metric: took 4.989226ms to wait for apiserver health ...
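[Note] The healthz probe above hits the apiserver directly on the profile's non-default port 8444. The same check can be made by hand, either through kubectl or against the endpoint itself (sketch):

    kubectl --context default-k8s-diff-port-692033 get --raw /healthz   # expect: ok
    curl -k https://192.168.39.215:8444/healthz                         # direct probe, TLS verification skipped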
	I1028 18:34:15.418720   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:15.424285   67489 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:15.424306   67489 system_pods.go:61] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.424310   67489 system_pods.go:61] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.424315   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.424318   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.424323   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.424327   67489 system_pods.go:61] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.424331   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.424337   67489 system_pods.go:61] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.424344   67489 system_pods.go:61] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.424351   67489 system_pods.go:74] duration metric: took 5.625205ms to wait for pod list to return data ...
	I1028 18:34:15.424359   67489 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:15.427132   67489 default_sa.go:45] found service account: "default"
	I1028 18:34:15.427153   67489 default_sa.go:55] duration metric: took 2.788005ms for default service account to be created ...
	I1028 18:34:15.427161   67489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:15.479404   67489 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:15.479427   67489 system_pods.go:89] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.479433   67489 system_pods.go:89] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.479436   67489 system_pods.go:89] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.479443   67489 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.479448   67489 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.479453   67489 system_pods.go:89] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.479460   67489 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.479472   67489 system_pods.go:89] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.479477   67489 system_pods.go:89] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.479491   67489 system_pods.go:126] duration metric: took 52.324012ms to wait for k8s-apps to be running ...
	I1028 18:34:15.479502   67489 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:15.479548   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:15.493743   67489 system_svc.go:56] duration metric: took 14.233947ms WaitForService to wait for kubelet
	I1028 18:34:15.493772   67489 kubeadm.go:582] duration metric: took 9.852243286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:15.493796   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:15.677127   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:15.677149   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:15.677156   67489 node_conditions.go:105] duration metric: took 183.355591ms to run NodePressure ...
	I1028 18:34:15.677167   67489 start.go:241] waiting for startup goroutines ...
	I1028 18:34:15.677174   67489 start.go:246] waiting for cluster config update ...
	I1028 18:34:15.677183   67489 start.go:255] writing updated cluster config ...
	I1028 18:34:15.677419   67489 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:15.731157   67489 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:15.732912   67489 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-692033" cluster and "default" namespace by default
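[Note] At this point both profiles in this log ("no-preload-051152" and "default-k8s-diff-port-692033") have finished starting, and switching between them is an ordinary kubectl context change (illustrative):

    kubectl config get-contexts
    kubectl config use-context no-preload-051152
    kubectl config use-context default-k8s-diff-port-692033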
	I1028 18:34:14.588528   66600 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:14.588660   66600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:14.588749   66600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:14.588886   66600 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:14.588985   66600 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:14.589089   66600 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:14.589179   66600 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:14.589268   66600 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:14.589362   66600 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:14.589472   66600 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:14.589575   66600 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:14.589638   66600 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:14.589739   66600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:14.902456   66600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:15.107236   66600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:15.198073   66600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:15.618175   66600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:15.804761   66600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:15.805675   66600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:15.809860   66600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:15.811538   66600 out.go:235]   - Booting up control plane ...
	I1028 18:34:15.811658   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:15.811761   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:15.812969   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:15.838182   66600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:15.846044   66600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:15.846126   66600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:15.981748   66600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:15.981899   66600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:16.483112   66600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262752ms
	I1028 18:34:16.483242   66600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:21.484655   66600 kubeadm.go:310] [api-check] The API server is healthy after 5.001327308s
	I1028 18:34:21.498067   66600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:21.508713   66600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:21.537520   66600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:21.537724   66600 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-021370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:21.551416   66600 kubeadm.go:310] [bootstrap-token] Using token: c2otm2.eh2uwearn2r38epe
	I1028 18:34:21.552613   66600 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:21.552721   66600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:21.556871   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:21.563570   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:21.566336   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:21.569226   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:21.575090   66600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:21.890874   66600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:22.315363   66600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:22.892050   66600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:22.892097   66600 kubeadm.go:310] 
	I1028 18:34:22.892198   66600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:22.892214   66600 kubeadm.go:310] 
	I1028 18:34:22.892297   66600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:22.892308   66600 kubeadm.go:310] 
	I1028 18:34:22.892346   66600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:22.892457   66600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:22.892549   66600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:22.892559   66600 kubeadm.go:310] 
	I1028 18:34:22.892628   66600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:22.892643   66600 kubeadm.go:310] 
	I1028 18:34:22.892705   66600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:22.892715   66600 kubeadm.go:310] 
	I1028 18:34:22.892784   66600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:22.892851   66600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:22.892958   66600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:22.892981   66600 kubeadm.go:310] 
	I1028 18:34:22.893093   66600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:22.893197   66600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:22.893212   66600 kubeadm.go:310] 
	I1028 18:34:22.893320   66600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893460   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:22.893506   66600 kubeadm.go:310] 	--control-plane 
	I1028 18:34:22.893515   66600 kubeadm.go:310] 
	I1028 18:34:22.893622   66600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:22.893631   66600 kubeadm.go:310] 
	I1028 18:34:22.893728   66600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893886   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:22.894813   66600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:22.895022   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:34:22.895037   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:22.897376   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:22.898532   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:22.909363   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:34:22.930151   66600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:22.930190   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:22.930280   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-021370 minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=embed-certs-021370 minikube.k8s.io/primary=true
	I1028 18:34:22.963249   66600 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:23.216574   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:23.717592   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.217674   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.717602   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.216832   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.717673   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.217668   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.716727   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.217476   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.343171   66600 kubeadm.go:1113] duration metric: took 4.413029537s to wait for elevateKubeSystemPrivileges
	I1028 18:34:27.343201   66600 kubeadm.go:394] duration metric: took 5m1.603783417s to StartCluster
	I1028 18:34:27.343221   66600 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.343302   66600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:27.344913   66600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.345149   66600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:27.345210   66600 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:27.345282   66600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-021370"
	I1028 18:34:27.345297   66600 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-021370"
	W1028 18:34:27.345304   66600 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:27.345310   66600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-021370"
	I1028 18:34:27.345339   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345337   66600 addons.go:69] Setting metrics-server=true in profile "embed-certs-021370"
	I1028 18:34:27.345353   66600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-021370"
	I1028 18:34:27.345360   66600 addons.go:234] Setting addon metrics-server=true in "embed-certs-021370"
	W1028 18:34:27.345369   66600 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:27.345381   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:27.345396   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345742   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345788   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345794   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345798   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.346770   66600 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:27.348169   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:27.361310   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1028 18:34:27.361763   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362073   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 18:34:27.362257   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.362292   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.362550   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362640   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363049   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.363079   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.363204   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.363242   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.363425   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363610   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.363934   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1028 18:34:27.364390   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.364865   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.364885   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.365229   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.365805   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.365852   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.367292   66600 addons.go:234] Setting addon default-storageclass=true in "embed-certs-021370"
	W1028 18:34:27.367314   66600 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:27.367347   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.367738   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.367782   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.381375   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1028 18:34:27.381846   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.382429   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.382441   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.382787   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.382926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.382965   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I1028 18:34:27.383568   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.384121   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.384134   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.384530   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.384730   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.384815   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386107   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I1028 18:34:27.386306   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386435   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.386888   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.386911   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.386977   66600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:27.387284   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.387866   66600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:27.387883   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.388259   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.388628   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:27.388645   66600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:27.388658   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.390614   66600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.390634   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:27.390650   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.393252   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393734   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.393758   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.394122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.394238   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.394364   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.394640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395084   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.395110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.395383   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.395540   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.395677   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.406551   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1028 18:34:27.406907   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.407358   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.407376   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.407699   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.407891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.409287   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.409489   66600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.409502   66600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:27.409517   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.412275   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412828   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.412858   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412984   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.413162   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.413303   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.413453   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.546891   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:27.571837   66600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595105   66600 node_ready.go:49] node "embed-certs-021370" has status "Ready":"True"
	I1028 18:34:27.595127   66600 node_ready.go:38] duration metric: took 23.255834ms for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595156   66600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:27.603107   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:27.635422   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.657051   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.666085   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:27.666110   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:27.706366   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:27.706394   66600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:27.772162   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:27.772191   66600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:27.844116   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:28.411454   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411478   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411522   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411544   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411751   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.411960   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.411982   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.411991   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411998   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.412223   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.412266   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413310   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413326   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413338   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.413344   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.413569   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413584   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.420867   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.420891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.421092   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.421168   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.421169   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957337   66600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.11317187s)
	I1028 18:34:28.957385   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957395   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957696   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957715   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957725   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957733   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957957   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957970   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957988   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957990   66600 addons.go:475] Verifying addon metrics-server=true in "embed-certs-021370"
	I1028 18:34:28.959590   66600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:28.961127   66600 addons.go:510] duration metric: took 1.615922156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:29.611126   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:32.110577   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:34.610544   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:37.111319   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.111342   66600 pod_ready.go:82] duration metric: took 9.508204126s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.111351   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119547   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.119571   66600 pod_ready.go:82] duration metric: took 8.212577ms for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119581   66600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126030   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.126048   66600 pod_ready.go:82] duration metric: took 6.46043ms for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126056   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132366   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.132386   66600 pod_ready.go:82] duration metric: took 6.323715ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132394   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137151   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.137171   66600 pod_ready.go:82] duration metric: took 4.770272ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137182   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507159   66600 pod_ready.go:93] pod "kube-proxy-nrr6g" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.507180   66600 pod_ready.go:82] duration metric: took 369.991591ms for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507189   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908006   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.908030   66600 pod_ready.go:82] duration metric: took 400.834669ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908038   66600 pod_ready.go:39] duration metric: took 10.312872321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:37.908052   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:37.908098   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:37.924515   66600 api_server.go:72] duration metric: took 10.579335154s to wait for apiserver process to appear ...
	I1028 18:34:37.924552   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:37.924572   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:34:37.929438   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:34:37.930716   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:37.930742   66600 api_server.go:131] duration metric: took 6.181503ms to wait for apiserver health ...
	I1028 18:34:37.930752   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:38.113401   66600 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:38.113430   66600 system_pods.go:61] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.113435   66600 system_pods.go:61] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.113439   66600 system_pods.go:61] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.113442   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.113446   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.113449   66600 system_pods.go:61] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.113452   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.113457   66600 system_pods.go:61] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.113462   66600 system_pods.go:61] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.113468   66600 system_pods.go:74] duration metric: took 182.711396ms to wait for pod list to return data ...
	I1028 18:34:38.113475   66600 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:38.309139   66600 default_sa.go:45] found service account: "default"
	I1028 18:34:38.309170   66600 default_sa.go:55] duration metric: took 195.688587ms for default service account to be created ...
	I1028 18:34:38.309182   66600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:38.510307   66600 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:38.510336   66600 system_pods.go:89] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.510341   66600 system_pods.go:89] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.510345   66600 system_pods.go:89] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.510349   66600 system_pods.go:89] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.510352   66600 system_pods.go:89] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.510355   66600 system_pods.go:89] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.510360   66600 system_pods.go:89] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.510368   66600 system_pods.go:89] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.510376   66600 system_pods.go:89] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.510391   66600 system_pods.go:126] duration metric: took 201.199416ms to wait for k8s-apps to be running ...
	I1028 18:34:38.510403   66600 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:38.510448   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:38.526043   66600 system_svc.go:56] duration metric: took 15.628796ms WaitForService to wait for kubelet
	I1028 18:34:38.526075   66600 kubeadm.go:582] duration metric: took 11.18089878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:38.526109   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:38.707568   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:38.707594   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:38.707604   66600 node_conditions.go:105] duration metric: took 181.491056ms to run NodePressure ...
	I1028 18:34:38.707615   66600 start.go:241] waiting for startup goroutines ...
	I1028 18:34:38.707621   66600 start.go:246] waiting for cluster config update ...
	I1028 18:34:38.707631   66600 start.go:255] writing updated cluster config ...
	I1028 18:34:38.707950   66600 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:38.755355   66600 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:38.757256   66600 out.go:177] * Done! kubectl is now configured to use "embed-certs-021370" cluster and "default" namespace by default
	I1028 18:34:49.381931   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:34:49.382111   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:34:49.383570   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:34:49.383633   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:49.383732   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:49.383859   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:49.383975   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:34:49.384073   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:49.385654   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:49.385757   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:49.385847   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:49.385937   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:49.386008   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:49.386118   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:49.386214   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:49.386316   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:49.386391   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:49.386478   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:49.386597   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:49.386643   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:49.386724   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:49.386813   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:49.386891   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:49.386983   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:49.387070   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:49.387209   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:49.387330   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:49.387389   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:49.387474   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:49.389653   67149 out.go:235]   - Booting up control plane ...
	I1028 18:34:49.389760   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:49.389867   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:49.389971   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:49.390088   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:49.390228   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:34:49.390277   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:34:49.390355   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390550   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390645   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390832   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390903   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391069   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391163   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391354   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391452   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391649   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391657   67149 kubeadm.go:310] 
	I1028 18:34:49.391691   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:34:49.391743   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:34:49.391758   67149 kubeadm.go:310] 
	I1028 18:34:49.391789   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:34:49.391822   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:34:49.391908   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:34:49.391914   67149 kubeadm.go:310] 
	I1028 18:34:49.392024   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:34:49.392073   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:34:49.392133   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:34:49.392142   67149 kubeadm.go:310] 
	I1028 18:34:49.392267   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:34:49.392363   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:34:49.392380   67149 kubeadm.go:310] 
	I1028 18:34:49.392525   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:34:49.392629   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:34:49.392737   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:34:49.392830   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:34:49.392879   67149 kubeadm.go:310] 
	W1028 18:34:49.392949   67149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:34:49.392991   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:34:49.869859   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:49.884524   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:49.896293   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:49.896318   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:49.896354   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:49.907312   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:49.907364   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:49.917926   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:49.928001   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:49.928048   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:49.938687   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.949217   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:49.949268   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.959955   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:49.970105   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:49.970156   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:49.980760   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:50.212973   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:36:46.686631   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:36:46.686753   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:36:46.688224   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:36:46.688325   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:36:46.688449   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:36:46.688587   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:36:46.688726   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:36:46.688813   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:36:46.690320   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:36:46.690427   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:36:46.690524   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:36:46.690627   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:36:46.690720   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:36:46.690824   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:36:46.690897   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:36:46.690984   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:36:46.691064   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:36:46.691161   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:36:46.691253   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:36:46.691309   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:36:46.691379   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:36:46.691426   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:36:46.691471   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:36:46.691547   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:36:46.691619   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:36:46.691713   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:36:46.691814   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:36:46.691864   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:36:46.691951   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:36:46.693258   67149 out.go:235]   - Booting up control plane ...
	I1028 18:36:46.693374   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:36:46.693471   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:36:46.693566   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:36:46.693682   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:36:46.693870   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:36:46.693930   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:36:46.694023   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694253   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694343   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694527   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694614   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694798   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694894   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695053   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695119   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695315   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695324   67149 kubeadm.go:310] 
	I1028 18:36:46.695357   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:36:46.695392   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:36:46.695398   67149 kubeadm.go:310] 
	I1028 18:36:46.695427   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:36:46.695456   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:36:46.695542   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:36:46.695549   67149 kubeadm.go:310] 
	I1028 18:36:46.695665   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:36:46.695717   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:36:46.695767   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:36:46.695781   67149 kubeadm.go:310] 
	I1028 18:36:46.695921   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:36:46.696037   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:36:46.696048   67149 kubeadm.go:310] 
	I1028 18:36:46.696177   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:36:46.696285   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:36:46.696390   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:36:46.696512   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:36:46.696560   67149 kubeadm.go:310] 
	I1028 18:36:46.696579   67149 kubeadm.go:394] duration metric: took 7m56.601380499s to StartCluster
	I1028 18:36:46.696618   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:36:46.696670   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:36:46.738714   67149 cri.go:89] found id: ""
	I1028 18:36:46.738741   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.738749   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:36:46.738757   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:36:46.738822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:36:46.772906   67149 cri.go:89] found id: ""
	I1028 18:36:46.772934   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.772944   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:36:46.772951   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:36:46.773028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:36:46.808785   67149 cri.go:89] found id: ""
	I1028 18:36:46.808809   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.808819   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:36:46.808827   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:36:46.808884   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:36:46.842977   67149 cri.go:89] found id: ""
	I1028 18:36:46.843007   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.843016   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:36:46.843022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:36:46.843095   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:36:46.878121   67149 cri.go:89] found id: ""
	I1028 18:36:46.878148   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.878159   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:36:46.878166   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:36:46.878231   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:36:46.911953   67149 cri.go:89] found id: ""
	I1028 18:36:46.911977   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.911984   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:36:46.911990   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:36:46.912054   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:36:46.944291   67149 cri.go:89] found id: ""
	I1028 18:36:46.944317   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.944324   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:36:46.944329   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:36:46.944379   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:36:46.976525   67149 cri.go:89] found id: ""
	I1028 18:36:46.976554   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.976564   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:36:46.976575   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:36:46.976588   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:36:47.026517   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:36:47.026544   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:36:47.041198   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:36:47.041231   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:36:47.115650   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:36:47.115681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:36:47.115695   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:36:47.218059   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:36:47.218093   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 18:36:47.257114   67149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:36:47.257182   67149 out.go:270] * 
	W1028 18:36:47.257240   67149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.257280   67149 out.go:270] * 
	W1028 18:36:47.258088   67149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:36:47.261521   67149 out.go:201] 
	W1028 18:36:47.262707   67149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.262742   67149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:36:47.262760   67149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:36:47.264073   67149 out.go:201] 
	
	
	==> CRI-O <==
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.115571803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141152115547415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3539a69-68bb-4cc9-83ee-6ece2554d285 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.116577248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe961949-1a76-42ba-91bc-f15d9217d3be name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.116695026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe961949-1a76-42ba-91bc-f15d9217d3be name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.116750760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fe961949-1a76-42ba-91bc-f15d9217d3be name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.152595473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=839bfe0c-6625-4025-a7a8-b748b3e7f7e5 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.152744601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=839bfe0c-6625-4025-a7a8-b748b3e7f7e5 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.154012049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=787e52bf-ab77-4d1e-b1c0-32c9220388b1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.154453256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141152154427804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=787e52bf-ab77-4d1e-b1c0-32c9220388b1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.155171682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb62dfc4-dd3e-4892-af3d-40e1553bddc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.155267287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb62dfc4-dd3e-4892-af3d-40e1553bddc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.155323290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bb62dfc4-dd3e-4892-af3d-40e1553bddc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.192135584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d952286d-af7d-41ad-ac59-a11260590f5f name=/runtime.v1.RuntimeService/Version
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.192223839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d952286d-af7d-41ad-ac59-a11260590f5f name=/runtime.v1.RuntimeService/Version
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.193139384Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fcc0c36-1d72-42b9-a3ed-02407182f26d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.193542264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141152193516166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fcc0c36-1d72-42b9-a3ed-02407182f26d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.194139210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17c8a0aa-efa0-4adb-a91c-4028f9ac6fee name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.194210918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17c8a0aa-efa0-4adb-a91c-4028f9ac6fee name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.194245782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17c8a0aa-efa0-4adb-a91c-4028f9ac6fee name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.226133431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2037da9-9262-4535-aee6-0ce78200ad3f name=/runtime.v1.RuntimeService/Version
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.226261584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2037da9-9262-4535-aee6-0ce78200ad3f name=/runtime.v1.RuntimeService/Version
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.228727285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed8f63f3-703c-4017-ac12-3dfccdcf37c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.229199572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141152229178391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed8f63f3-703c-4017-ac12-3dfccdcf37c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.229873402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c292a72-3876-40d0-8ec6-f680452678b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.230006001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c292a72-3876-40d0-8ec6-f680452678b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:45:52 old-k8s-version-223868 crio[633]: time="2024-10-28 18:45:52.230044581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0c292a72-3876-40d0-8ec6-f680452678b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 18:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052154] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040854] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.948848] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.654628] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568759] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.229575] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.078716] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057084] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.217028] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.132211] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.266373] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +7.871428] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.072119] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.097659] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[Oct28 18:29] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 18:32] systemd-fstab-generator[5063]: Ignoring "noauto" option for root device
	[Oct28 18:34] systemd-fstab-generator[5342]: Ignoring "noauto" option for root device
	[  +0.070292] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:45:52 up 17 min,  0 users,  load average: 0.02, 0.04, 0.04
	Linux old-k8s-version-223868 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc0008e54d0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: net.cgoIPLookup(0xc00027d6e0, 0x48ab5d6, 0x3, 0xc0008e54d0, 0x1f)
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: created by net.cgoLookupIP
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: goroutine 121 [select]:
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000473220, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0003b4c60, 0x0, 0x0)
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0007341c0)
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 28 18:45:47 old-k8s-version-223868 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 18:45:47 old-k8s-version-223868 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 18:45:47 old-k8s-version-223868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 28 18:45:47 old-k8s-version-223868 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 18:45:47 old-k8s-version-223868 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6526]: I1028 18:45:47.886629    6526 server.go:416] Version: v1.20.0
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6526]: I1028 18:45:47.886899    6526 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6526]: I1028 18:45:47.888574    6526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6526]: W1028 18:45:47.889628    6526 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 28 18:45:47 old-k8s-version-223868 kubelet[6526]: I1028 18:45:47.890081    6526 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (245.985469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-223868" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.18s)
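For reference, the recovery path that the log above names can be walked by hand; this is a minimal sketch using only the commands already suggested in the kubeadm output and in minikube's own hint, with the profile name old-k8s-version-223868 taken from these logs (it is deleted later in the run, per the Audit table below):

	# On the node (minikube ssh -p old-k8s-version-223868), inspect the kubelet unit and its journal:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List any control-plane containers CRI-O started, as the kubeadm hint describes:
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the cgroup-driver override that minikube suggests for K8S_KUBELET_NOT_RUNNING:
	minikube start -p old-k8s-version-223868 --extra-config=kubelet.cgroup-driver=systemd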

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (387s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-051152 -n no-preload-051152
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 18:49:16.379299611 +0000 UTC m=+6195.837343931
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-051152 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-051152 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.087µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-051152 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
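The check above asserts that the dashboard-metrics-scraper deployment carries the overridden image; a manual equivalent, sketched under the assumption that the apiserver is reachable, using only the context, namespace, label, deployment name, and image string shown in this log:

	kubectl --context no-preload-051152 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-051152 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# Per the assertion above, the image list is expected to contain registry.k8s.io/echoserver:1.4.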
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-051152 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-051152 logs -n 25: (1.364753037s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-021370            | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:48 UTC |
	| start   | -p newest-cni-724173 --memory=2200 --alsologtostderr   | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:48 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-724173             | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-724173                                   | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:49 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-724173                  | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:49 UTC | 28 Oct 24 18:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-724173 --memory=2200 --alsologtostderr   | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:49 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:49:07
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
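The header above documents the klog/glog line format used by every entry that follows: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. As a minimal sketch only (the regular expression and field names below are illustrative assumptions, not part of minikube or klog), a parser for that format could look like:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine follows the documented format:
    //   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    // The capture groups are an assumption for illustration only.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	sample := "I1028 18:49:07.877895   74054 out.go:345] Setting OutFile to fd 1 ..."
    	m := klogLine.FindStringSubmatch(sample)
    	if m == nil {
    		fmt.Println("not a klog line")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }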
	I1028 18:49:07.877895   74054 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:49:07.878017   74054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:49:07.878027   74054 out.go:358] Setting ErrFile to fd 2...
	I1028 18:49:07.878032   74054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:49:07.878203   74054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:49:07.878789   74054 out.go:352] Setting JSON to false
	I1028 18:49:07.879731   74054 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9091,"bootTime":1730132257,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:49:07.879826   74054 start.go:139] virtualization: kvm guest
	I1028 18:49:07.882076   74054 out.go:177] * [newest-cni-724173] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:49:07.883538   74054 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:49:07.883539   74054 notify.go:220] Checking for updates...
	I1028 18:49:07.886478   74054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:49:07.887845   74054 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:49:07.889180   74054 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:49:07.890645   74054 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:49:07.891912   74054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:49:07.893732   74054 config.go:182] Loaded profile config "newest-cni-724173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:07.894368   74054 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:49:07.894428   74054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:49:07.910198   74054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I1028 18:49:07.910671   74054 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:49:07.911195   74054 main.go:141] libmachine: Using API Version  1
	I1028 18:49:07.911221   74054 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:49:07.911661   74054 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:49:07.911840   74054 main.go:141] libmachine: (newest-cni-724173) Calling .DriverName
	I1028 18:49:07.912106   74054 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:49:07.912399   74054 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:49:07.912449   74054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:49:07.926831   74054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I1028 18:49:07.927168   74054 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:49:07.927587   74054 main.go:141] libmachine: Using API Version  1
	I1028 18:49:07.927602   74054 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:49:07.927894   74054 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:49:07.928097   74054 main.go:141] libmachine: (newest-cni-724173) Calling .DriverName
	I1028 18:49:07.961671   74054 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:49:07.962924   74054 start.go:297] selected driver: kvm2
	I1028 18:49:07.962944   74054 start.go:901] validating driver "kvm2" against &{Name:newest-cni-724173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-724173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:49:07.963075   74054 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:49:07.963815   74054 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:49:07.963885   74054 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:49:07.978959   74054 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:49:07.979342   74054 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 18:49:07.979367   74054 cni.go:84] Creating CNI manager for ""
	I1028 18:49:07.979414   74054 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:49:07.979446   74054 start.go:340] cluster config:
	{Name:newest-cni-724173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-724173 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:49:07.979541   74054 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:49:07.981198   74054 out.go:177] * Starting "newest-cni-724173" primary control-plane node in "newest-cni-724173" cluster
	I1028 18:49:07.982500   74054 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:49:07.982527   74054 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:49:07.982535   74054 cache.go:56] Caching tarball of preloaded images
	I1028 18:49:07.982597   74054 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:49:07.982607   74054 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
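The preload lines above follow a simple pattern: look for the preloaded-images tarball in the local cache and only download it when it is missing. A rough sketch of that check, using the cache path from the log and a hypothetical downloadPreload callback in place of the real download step, might be:

    package main

    import (
    	"fmt"
    	"os"
    )

    // ensurePreload skips the download when the tarball already exists locally,
    // mirroring the "Found ... in cache, skipping download" message above.
    // downloadPreload is a hypothetical stand-in for the real download step.
    func ensurePreload(path string, downloadPreload func(string) error) error {
    	if _, err := os.Stat(path); err == nil {
    		fmt.Printf("Found %s in cache, skipping download\n", path)
    		return nil
    	} else if !os.IsNotExist(err) {
    		return err
    	}
    	return downloadPreload(path)
    }

    func main() {
    	tarball := "/home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
    	_ = ensurePreload(tarball, func(p string) error {
    		fmt.Println("would download", p)
    		return nil
    	})
    }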
	I1028 18:49:07.982718   74054 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/newest-cni-724173/config.json ...
	I1028 18:49:07.982936   74054 start.go:360] acquireMachinesLock for newest-cni-724173: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:49:07.982979   74054 start.go:364] duration metric: took 23.891µs to acquireMachinesLock for "newest-cni-724173"
	I1028 18:49:07.982993   74054 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:49:07.982998   74054 fix.go:54] fixHost starting: 
	I1028 18:49:07.983247   74054 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:49:07.983278   74054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:49:07.997042   74054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I1028 18:49:07.997459   74054 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:49:07.997876   74054 main.go:141] libmachine: Using API Version  1
	I1028 18:49:07.997895   74054 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:49:07.998182   74054 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:49:07.998331   74054 main.go:141] libmachine: (newest-cni-724173) Calling .DriverName
	I1028 18:49:07.998488   74054 main.go:141] libmachine: (newest-cni-724173) Calling .GetState
	I1028 18:49:07.999934   74054 fix.go:112] recreateIfNeeded on newest-cni-724173: state=Stopped err=<nil>
	I1028 18:49:07.999953   74054 main.go:141] libmachine: (newest-cni-724173) Calling .DriverName
	W1028 18:49:08.000097   74054 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:49:08.001984   74054 out.go:177] * Restarting existing kvm2 VM for "newest-cni-724173" ...
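The fixHost lines above capture the existing-machine path: the saved profile is reused, the domain's state is read, and a stopped VM is started again rather than recreated. A tiny sketch of that decision, with hypothetical state constants and start/recreate callbacks rather than libmachine's real API, could be:

    package main

    import "fmt"

    type machineState string

    const (
    	stateRunning machineState = "Running"
    	stateStopped machineState = "Stopped"
    )

    // fixHost reuses an existing machine when possible: a running VM is left
    // alone, a stopped VM is restarted, anything else is recreated. The states
    // and callbacks here are illustrative only.
    func fixHost(state machineState, start, recreate func() error) error {
    	switch state {
    	case stateRunning:
    		fmt.Println("machine already running, nothing to do")
    		return nil
    	case stateStopped:
    		fmt.Println("unexpected machine state, will restart")
    		return start()
    	default:
    		return recreate()
    	}
    }

    func main() {
    	_ = fixHost(stateStopped,
    		func() error { fmt.Println("Restarting existing kvm2 VM..."); return nil },
    		func() error { fmt.Println("Recreating VM..."); return nil })
    }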
	I1028 18:49:08.003260   74054 main.go:141] libmachine: (newest-cni-724173) Calling .Start
	I1028 18:49:08.003424   74054 main.go:141] libmachine: (newest-cni-724173) Ensuring networks are active...
	I1028 18:49:08.004076   74054 main.go:141] libmachine: (newest-cni-724173) Ensuring network default is active
	I1028 18:49:08.004429   74054 main.go:141] libmachine: (newest-cni-724173) Ensuring network mk-newest-cni-724173 is active
	I1028 18:49:08.004868   74054 main.go:141] libmachine: (newest-cni-724173) Getting domain xml...
	I1028 18:49:08.005583   74054 main.go:141] libmachine: (newest-cni-724173) Creating domain...
	I1028 18:49:09.250809   74054 main.go:141] libmachine: (newest-cni-724173) Waiting to get IP...
	I1028 18:49:09.251568   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:09.252012   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:09.252056   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:09.251979   74089 retry.go:31] will retry after 240.348629ms: waiting for machine to come up
	I1028 18:49:09.494549   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:09.495011   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:09.495042   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:09.494955   74089 retry.go:31] will retry after 312.376104ms: waiting for machine to come up
	I1028 18:49:09.808555   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:09.809096   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:09.809127   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:09.809038   74089 retry.go:31] will retry after 447.145045ms: waiting for machine to come up
	I1028 18:49:10.257530   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:10.258055   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:10.258110   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:10.257993   74089 retry.go:31] will retry after 417.146373ms: waiting for machine to come up
	I1028 18:49:10.676687   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:10.677115   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:10.677143   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:10.677072   74089 retry.go:31] will retry after 556.324492ms: waiting for machine to come up
	I1028 18:49:11.234769   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:11.235137   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:11.235162   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:11.235085   74089 retry.go:31] will retry after 627.639843ms: waiting for machine to come up
	I1028 18:49:11.864699   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:11.865181   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:11.865203   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:11.865163   74089 retry.go:31] will retry after 1.104921607s: waiting for machine to come up
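The retry.go entries above show the start flow polling for the VM's IP with a growing, slightly randomized delay between attempts (240ms, 312ms, 447ms, ... 1.1s). A minimal sketch of that wait-for-IP loop, with a hypothetical lookupIP function standing in for the libvirt query and backoff constants chosen only for illustration, could be:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookupIP until it returns an address, sleeping a little
    // longer (with jitter) after each failed attempt, similar to the
    // "will retry after ..." messages above. lookupIP is a hypothetical stand-in.
    func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
    	delay := 200 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow the base delay each round
    	}
    	return "", errors.New("machine did not report an IP in time")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 4 {
    			return "", errors.New("no IP yet")
    		}
    		return "192.168.72.31", nil
    	}, 10)
    	fmt.Println(ip, err)
    }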
	
	
	==> CRI-O <==
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.062785636Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:58ceac177b577efe9e5cd1462de71c879e5dad5da02c2f034eb4264418ae16a0,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-9rh4q,Uid:24f7156f-c19f-4d0b-8d23-c88e0fe571de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140419406203386,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-9rh4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f7156f-c19f-4d0b-8d23-c88e0fe571de,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:33:39.098106017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3fb18822-fcad-4041-9ac9-644b101d8ca4,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140419226990815,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T18:33:38.907597163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sx5qg,Uid:e687b4d1-ab2e-4084-b1b0-f15b5e7817af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140418397297587,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:33:38.088595605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mxhp2,Uid:4aec7fb0-910f-48c1-
ad4b-8bb21fd7e24d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140418375268789,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:33:38.058467185Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&PodSandboxMetadata{Name:kube-proxy-28qht,Uid:710be347-bd18-4873-be61-1ccfd2088686,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140418145309871,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T18:33:37.820318861Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-051152,Uid:4db29c0360ebe76903f38dd64ffdd6ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140407229877120,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4db29c0360ebe76903f38dd64ffdd6ae,kubernetes.io/config.seen: 2024-10-28T18:33:26.772546378Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&PodSandboxMetadata{Name:kube-controller-m
anager-no-preload-051152,Uid:15a831305967cfb08d88e33aeda9a2d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140407226235371,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15a831305967cfb08d88e33aeda9a2d8,kubernetes.io/config.seen: 2024-10-28T18:33:26.772544728Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-051152,Uid:869aac1776457cc65d6cf9f76d924ca9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730140407210581645,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.78:2379,kubernetes.io/config.hash: 869aac1776457cc65d6cf9f76d924ca9,kubernetes.io/config.seen: 2024-10-28T18:33:26.772530611Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-051152,Uid:ed4c9e4554c2958c7503cae4439988b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730140407208372328,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.78:8443,
kubernetes.io/config.hash: ed4c9e4554c2958c7503cae4439988b7,kubernetes.io/config.seen: 2024-10-28T18:33:26.772542799Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-051152,Uid:ed4c9e4554c2958c7503cae4439988b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1730140119351915964,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.78:8443,kubernetes.io/config.hash: ed4c9e4554c2958c7503cae4439988b7,kubernetes.io/config.seen: 2024-10-28T18:28:38.822560618Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/intercep
tors.go:74" id=5724fc8d-3b69-4a65-b90f-4b419823112a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.063769804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1c91de0-a219-4f9b-b004-04a05278b859 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.063827079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1c91de0-a219-4f9b-b004-04a05278b859 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.064024880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1c91de0-a219-4f9b-b004-04a05278b859 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.068896042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=170ea9e0-b0e7-4edb-a182-95d942954e4f name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.068980982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=170ea9e0-b0e7-4edb-a182-95d942954e4f name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.069948341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d55c61f-ac6a-4a7c-a58c-83de5ea2eefb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.070567862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141357070535944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d55c61f-ac6a-4a7c-a58c-83de5ea2eefb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.071461147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a02c8c9-6a86-430e-94b7-5903494ee571 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.071512138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a02c8c9-6a86-430e-94b7-5903494ee571 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.071748983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a02c8c9-6a86-430e-94b7-5903494ee571 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.123765591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c3d0f4e-91fa-4196-9e99-4c2f7e819297 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.123889160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c3d0f4e-91fa-4196-9e99-4c2f7e819297 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.125107426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ca105c3-392e-4892-aa86-95ee3dcf6937 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.125639166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141357125616313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ca105c3-392e-4892-aa86-95ee3dcf6937 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.126421804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b80679fd-a8be-4fd4-8d0b-11f32334ce54 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.126472519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b80679fd-a8be-4fd4-8d0b-11f32334ce54 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.126738406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b80679fd-a8be-4fd4-8d0b-11f32334ce54 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.173283748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbadf643-b9f5-4b13-95d5-122b3e923c3e name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.173366791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbadf643-b9f5-4b13-95d5-122b3e923c3e name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.175704697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=019ede6d-19c9-4499-8129-78de09c7caa5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.176069053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141357176045185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=019ede6d-19c9-4499-8129-78de09c7caa5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.178430115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=181a29c2-4275-4744-87a5-2123b42faba1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.178783923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=181a29c2-4275-4744-87a5-2123b42faba1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:17 no-preload-051152 crio[707]: time="2024-10-28 18:49:17.179285327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f,PodSandboxId:ec6d303457c4803e4cf71b0bad43cde9a226d67513d8f396655281eb4fc3196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140419788426660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb18822-fcad-4041-9ac9-644b101d8ca4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545,PodSandboxId:086ada1a631f54fa76425c1d0cf6af9d785b125f5dfab64684bb1ff972588186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419314865393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx5qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e687b4d1-ab2e-4084-b1b0-f15b5e7817af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5,PodSandboxId:9b8a560aaa473f4aaadb1830a839478852e883a8b723de9d77441e965fc1eec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140419201256059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mxhp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
ec7fb0-910f-48c1-ad4b-8bb21fd7e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772,PodSandboxId:38d15fad63b8633e3326d82ca6da883af6ca2ba39dd9bb6b62a96551d9f57c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730140418442848016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28qht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 710be347-bd18-4873-be61-1ccfd2088686,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7,PodSandboxId:4f0c484a1a87197a4af44c85cf796e2e35de65cbb6860507d065d66b12271e30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140407475999033,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4db29c0360ebe76903f38dd64ffdd6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb,PodSandboxId:f9c0ed8466dbbd6e3b37e9af6cd01af800227046b6b21248fae039caa116c08e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140407462470234,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a831305967cfb08d88e33aeda9a2d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402,PodSandboxId:459669cfa829b3dd2e8f669b1a301e2d1b7bfafb8123c49d7cd7f03e28368667,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140407433850447,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869aac1776457cc65d6cf9f76d924ca9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4,PodSandboxId:a3f4e0abdf259e619416898d791e4d6c66e2dc439d2301c2a88f9ae07c20c9d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140407360997868,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab,PodSandboxId:7c8e1b62281f6dcc9f5f796305d35967a2d402a090589da3db2e98af083ada5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140119595403491,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-051152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4c9e4554c2958c7503cae4439988b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=181a29c2-4275-4744-87a5-2123b42faba1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a7490abcdc75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   ec6d303457c48       storage-provisioner
	df2a4392d30b3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   086ada1a631f5       coredns-7c65d6cfc9-sx5qg
	57afc8bfca048       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   9b8a560aaa473       coredns-7c65d6cfc9-mxhp2
	f0c7e3bac7dcc       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 minutes ago      Running             kube-proxy                0                   38d15fad63b86       kube-proxy-28qht
	1df93a4fd5298       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   15 minutes ago      Running             kube-scheduler            2                   4f0c484a1a871       kube-scheduler-no-preload-051152
	5fad3c448c620       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   15 minutes ago      Running             kube-controller-manager   2                   f9c0ed8466dbb       kube-controller-manager-no-preload-051152
	6d2229645331e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   459669cfa829b       etcd-no-preload-051152
	d68dd6a8f4e4d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   15 minutes ago      Running             kube-apiserver            2                   a3f4e0abdf259       kube-apiserver-no-preload-051152
	9e9a4b33605aa       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   20 minutes ago      Exited              kube-apiserver            1                   7c8e1b62281f6       kube-apiserver-no-preload-051152
	
	
	==> coredns [57afc8bfca0481dfda2a79dbe261ae16a0f5189d81e23729a2c9ce51a1cb37b5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [df2a4392d30b33fb6be942c62fe450a86ad5e874204dea437d4a1bfe10d04545] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-051152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-051152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=no-preload-051152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:33:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-051152
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:49:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:49:01 +0000   Mon, 28 Oct 2024 18:33:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:49:01 +0000   Mon, 28 Oct 2024 18:33:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:49:01 +0000   Mon, 28 Oct 2024 18:33:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:49:01 +0000   Mon, 28 Oct 2024 18:33:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.78
	  Hostname:    no-preload-051152
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d1abce9a1694ead8a0537b8e0e44c6e
	  System UUID:                9d1abce9-a169-4ead-8a05-37b8e0e44c6e
	  Boot ID:                    da7132f0-f8af-4057-9464-63b6b5bf9be7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-mxhp2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-sx5qg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-051152                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-051152             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-051152    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-28qht                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-051152             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-9rh4q              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-051152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-051152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-051152 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-051152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-051152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-051152 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-051152 event: Registered Node no-preload-051152 in Controller
	
	
	==> dmesg <==
	[  +0.040045] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.853028] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.435450] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.443590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.645745] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060364] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.200444] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.113118] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.272482] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.752223] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.058801] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.832003] systemd-fstab-generator[1422]: Ignoring "noauto" option for root device
	[  +4.057958] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.057504] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.266517] kauditd_printk_skb: 25 callbacks suppressed
	[Oct28 18:33] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.460467] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +4.548056] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.518414] systemd-fstab-generator[3445]: Ignoring "noauto" option for root device
	[  +5.340523] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.111189] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.550282] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6d2229645331e597ffc96b1eb30ab41efaa5604bcbd9bc2da2f29ac1c1179402] <==
	{"level":"info","ts":"2024-10-28T18:33:28.198851Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.199910Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:33:28.200827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T18:33:28.208247Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"96f5678f0acb0355","local-member-id":"9fc63996407e1dc3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.208360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.208397Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:33:28.208409Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T18:33:28.208916Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:33:28.209632Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.78:2379"}
	{"level":"info","ts":"2024-10-28T18:33:28.230003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T18:33:28.230075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T18:43:28.806580Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-10-28T18:43:28.816984Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":715,"took":"10.015655ms","hash":2764770102,"current-db-size-bytes":2330624,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2330624,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-10-28T18:43:28.817044Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2764770102,"revision":715,"compact-revision":-1}
	{"level":"info","ts":"2024-10-28T18:48:28.816544Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":958}
	{"level":"info","ts":"2024-10-28T18:48:28.820763Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":958,"took":"3.365392ms","hash":2097382333,"current-db-size-bytes":2330624,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-10-28T18:48:28.820850Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2097382333,"revision":958,"compact-revision":715}
	{"level":"warn","ts":"2024-10-28T18:48:42.237568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.668849ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2144719288526178904 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.78\" mod_revision:1205 > success:<request_put:<key:\"/registry/masterleases/192.168.61.78\" value_size:66 lease:2144719288526178902 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.78\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T18:48:42.238345Z","caller":"traceutil/trace.go:171","msg":"trace[200304739] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"187.021102ms","start":"2024-10-28T18:48:42.051283Z","end":"2024-10-28T18:48:42.238304Z","steps":["trace[200304739] 'process raft request'  (duration: 45.386154ms)","trace[200304739] 'compare'  (duration: 139.50584ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T18:48:42.491552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.528664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:48:42.491742Z","caller":"traceutil/trace.go:171","msg":"trace[348242833] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:1213; }","duration":"150.729326ms","start":"2024-10-28T18:48:42.340995Z","end":"2024-10-28T18:48:42.491724Z","steps":["trace[348242833] 'count revisions from in-memory index tree'  (duration: 150.416117ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:48:42.491746Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.591224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:48:42.492009Z","caller":"traceutil/trace.go:171","msg":"trace[1935929333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1213; }","duration":"130.87411ms","start":"2024-10-28T18:48:42.361127Z","end":"2024-10-28T18:48:42.492001Z","steps":["trace[1935929333] 'range keys from in-memory index tree'  (duration: 130.454668ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T18:48:42.811325Z","caller":"traceutil/trace.go:171","msg":"trace[1600128211] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"312.656353ms","start":"2024-10-28T18:48:42.498650Z","end":"2024-10-28T18:48:42.811306Z","steps":["trace[1600128211] 'process raft request'  (duration: 312.421212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:48:42.811558Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:48:42.498633Z","time spent":"312.841094ms","remote":"127.0.0.1:39556","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1212 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 18:49:17 up 21 min,  0 users,  load average: 0.00, 0.06, 0.13
	Linux no-preload-051152 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9e9a4b33605aac586d6fa63990cc84193e2afd1ce540bade220b4cf2ffaa63ab] <==
	W1028 18:33:19.667647       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.667749       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.684396       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.711116       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.730489       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.742195       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.753914       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.791593       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.879301       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.883731       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.963455       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.984302       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:19.987658       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:20.026963       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:20.117528       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:20.274561       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:23.125712       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:23.563563       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.298603       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.319531       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.427358       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.541580       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.567411       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.572744       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:24.726344       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d68dd6a8f4e4d2687a4520155ca9fbacc0dd52548b79ca52ac7ed6de7e86aaa4] <==
	I1028 18:44:31.176878       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:44:31.177982       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:46:31.177933       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:46:31.178050       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 18:46:31.178127       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:46:31.178260       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:46:31.180089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:46:31.180117       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:48:30.178616       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:48:30.178823       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 18:48:31.180322       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:48:31.180457       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:48:31.180577       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 18:48:31.180617       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:48:31.181745       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:48:31.181809       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5fad3c448c6207d1f613139ae917779a75322b03394d4be7c83f1b1742475ccb] <==
	E1028 18:44:07.286962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:44:07.737393       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:44:37.294110       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:44:37.746086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:44:47.610542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="229.01µs"
	I1028 18:44:59.609631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="83.245µs"
	E1028 18:45:07.300907       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:45:07.753828       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:45:37.308550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:45:37.761282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:46:07.314469       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:46:07.768728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:46:37.323011       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:46:37.777790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:47:07.330640       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:47:07.785715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:47:37.337700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:47:37.794637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:48:07.345655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:48:07.810398       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:48:37.353407       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:48:37.821071       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:49:01.405464       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-051152"
	E1028 18:49:07.361021       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:49:07.831627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f0c7e3bac7dccf412bbc66fd2f699d368eaadabe3c3dd0559f2e6217256a7772] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:33:38.942498       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:33:38.959672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.78"]
	E1028 18:33:38.959741       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:33:39.464391       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:33:39.464444       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:33:39.464498       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:33:39.642368       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:33:39.646457       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:33:39.648773       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:33:39.655132       1 config.go:199] "Starting service config controller"
	I1028 18:33:39.655326       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:33:39.655444       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:33:39.655533       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:33:39.666686       1 config.go:328] "Starting node config controller"
	I1028 18:33:39.666703       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:33:39.757660       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 18:33:39.757704       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:33:39.776754       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1df93a4fd5298b4fd6122fe4f588b51d6ef318c3429db65b7de5860ac1b554d7] <==
	W1028 18:33:30.270624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:30.271781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 18:33:30.271832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 18:33:30.271882       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:33:30.271933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:30.270983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:33:30.271998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.098324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:31.098437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.222766       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 18:33:31.222949       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 18:33:31.262709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:31.262809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.359359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 18:33:31.359480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.378046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:33:31.378103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.399813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:31.399939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:31.480112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:33:31.480266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1028 18:33:33.049737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 18:48:04 no-preload-051152 kubelet[3452]: E1028 18:48:04.594113    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:48:12 no-preload-051152 kubelet[3452]: E1028 18:48:12.866264    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141292865902702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:12 no-preload-051152 kubelet[3452]: E1028 18:48:12.866852    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141292865902702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:19 no-preload-051152 kubelet[3452]: E1028 18:48:19.594838    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:48:22 no-preload-051152 kubelet[3452]: E1028 18:48:22.868341    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141302867903035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:22 no-preload-051152 kubelet[3452]: E1028 18:48:22.868381    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141302867903035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:32 no-preload-051152 kubelet[3452]: E1028 18:48:32.631479    3452 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 18:48:32 no-preload-051152 kubelet[3452]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 18:48:32 no-preload-051152 kubelet[3452]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 18:48:32 no-preload-051152 kubelet[3452]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 18:48:32 no-preload-051152 kubelet[3452]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 18:48:32 no-preload-051152 kubelet[3452]: E1028 18:48:32.871429    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141312870895545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:32 no-preload-051152 kubelet[3452]: E1028 18:48:32.871457    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141312870895545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:33 no-preload-051152 kubelet[3452]: E1028 18:48:33.594849    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:48:42 no-preload-051152 kubelet[3452]: E1028 18:48:42.872903    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141322872434336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:42 no-preload-051152 kubelet[3452]: E1028 18:48:42.873395    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141322872434336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:47 no-preload-051152 kubelet[3452]: E1028 18:48:47.593830    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:48:52 no-preload-051152 kubelet[3452]: E1028 18:48:52.874948    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141332874499781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:52 no-preload-051152 kubelet[3452]: E1028 18:48:52.875307    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141332874499781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:02 no-preload-051152 kubelet[3452]: E1028 18:49:02.594851    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	Oct 28 18:49:02 no-preload-051152 kubelet[3452]: E1028 18:49:02.876613    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141342876207686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:02 no-preload-051152 kubelet[3452]: E1028 18:49:02.876655    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141342876207686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:12 no-preload-051152 kubelet[3452]: E1028 18:49:12.878884    3452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141352878584670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:12 no-preload-051152 kubelet[3452]: E1028 18:49:12.878934    3452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141352878584670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:15 no-preload-051152 kubelet[3452]: E1028 18:49:15.593932    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9rh4q" podUID="24f7156f-c19f-4d0b-8d23-c88e0fe571de"
	
	
	==> storage-provisioner [9a7490abcdc75f0abeeb5dcab045990fb91a730f4d00f621eecbf17d886dc28f] <==
	I1028 18:33:39.882785       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 18:33:39.898522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 18:33:39.898584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 18:33:39.912227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 18:33:39.912408       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-051152_20aeb7d9-1f7f-478f-bfff-47100469eed1!
	I1028 18:33:39.915822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71223ce6-ec64-472b-bde7-65690fd6dd67", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-051152_20aeb7d9-1f7f-478f-bfff-47100469eed1 became leader
	I1028 18:33:40.013231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-051152_20aeb7d9-1f7f-478f-bfff-47100469eed1!
	

                                                
                                                
-- /stdout --
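The kubelet log captured above cycles through two errors for the whole window: metrics-server stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (the test appears to deliberately point the addon at an unreachable registry, so the pull failure itself is expected), and the eviction manager's "missing image stats" complaint from the CRI-O image filesystem report. A minimal sketch of how one could confirm which image the addon Deployment is actually configured with, assuming the addon's default Deployment name metrics-server in kube-system (context name taken from the log above):

	kubectl --context no-preload-051152 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'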
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-051152 -n no-preload-051152
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-051152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9rh4q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-051152 describe pod metrics-server-6867b74b74-9rh4q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-051152 describe pod metrics-server-6867b74b74-9rh4q: exit status 1 (66.573305ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9rh4q" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-051152 describe pod metrics-server-6867b74b74-9rh4q: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (387.00s)
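Note that the post-mortem describe above targets the pod name captured earlier (metrics-server-6867b74b74-9rh4q), which no longer exists by the time the helper runs, hence the NotFound. A sketch of a label-based re-check that survives pod churn, assuming the addon's usual k8s-app=metrics-server label:

	kubectl --context no-preload-051152 -n kube-system get pods -l k8s-app=metrics-server -o wide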

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (484.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 18:51:21.441435136 +0000 UTC m=+6320.899479452
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-692033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.035µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-692033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
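The empty "Addon deployment info" above reflects the describe call failing instantly (2.035µs) on the already-expired test context, not the cluster being unreachable. Outside the harness the same check could be run without a deadline, for example (context and deployment names taken from the log above; a sketch, not part of the test):

	kubectl --context default-k8s-diff-port-692033 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'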
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-692033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-692033 logs -n 25: (1.56665899s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |   Profile   |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-457876 sudo cat                              | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | /etc/hosts                                           |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo cat                              | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | /etc/resolv.conf                                     |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo crictl                           | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | pods                                                 |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo crictl ps                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | --all                                                |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo find                             | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | /etc/cni -type f -exec sh -c                         |             |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo ip a s                           | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	| ssh     | -p auto-457876 sudo ip r s                           | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	| ssh     | -p auto-457876 sudo                                  | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | iptables-save                                        |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo iptables                         | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | -t nat -L -n -v                                      |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | status kubelet --all --full                          |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | cat kubelet --no-pager                               |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo journalctl                       | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | -xeu kubelet --all --full                            |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo cat                              | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo cat                              | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | /var/lib/kubelet/config.yaml                         |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC |                     |
	|         | status docker --all --full                           |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | cat docker --no-pager                                |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo cat                              | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | /etc/docker/daemon.json                              |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo docker                           | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC |                     |
	|         | system info                                          |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC |                     |
	|         | status cri-docker --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | cat cri-docker --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo cat                              | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo cat                              | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo                                  | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC | 28 Oct 24 18:51 UTC |
	|         | cri-dockerd --version                                |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC |                     |
	|         | status containerd --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-457876 sudo systemctl                        | auto-457876 | jenkins | v1.34.0 | 28 Oct 24 18:51 UTC |                     |
	|         | cat containerd --no-pager                            |             |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:49:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:49:49.406122   75305 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:49:49.406215   75305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:49:49.406222   75305 out.go:358] Setting ErrFile to fd 2...
	I1028 18:49:49.406225   75305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:49:49.406380   75305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:49:49.406913   75305 out.go:352] Setting JSON to false
	I1028 18:49:49.407753   75305 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9132,"bootTime":1730132257,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:49:49.407838   75305 start.go:139] virtualization: kvm guest
	I1028 18:49:49.409890   75305 out.go:177] * [flannel-457876] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:49:49.411085   75305 notify.go:220] Checking for updates...
	I1028 18:49:49.411091   75305 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:49:49.412255   75305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:49:49.413532   75305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:49:49.414642   75305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:49:49.415680   75305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:49:49.416980   75305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:49:49.418559   75305 config.go:182] Loaded profile config "auto-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:49.418649   75305 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:49.418724   75305 config.go:182] Loaded profile config "kindnet-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:49.418802   75305 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:49:49.453603   75305 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 18:49:49.454695   75305 start.go:297] selected driver: kvm2
	I1028 18:49:49.454707   75305 start.go:901] validating driver "kvm2" against <nil>
	I1028 18:49:49.454720   75305 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:49:49.455443   75305 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:49:49.455524   75305 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:49:49.469823   75305 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:49:49.469861   75305 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 18:49:49.470077   75305 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:49:49.470111   75305 cni.go:84] Creating CNI manager for "flannel"
	I1028 18:49:49.470116   75305 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1028 18:49:49.470155   75305 start.go:340] cluster config:
	{Name:flannel-457876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:49:49.470238   75305 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:49:49.471664   75305 out.go:177] * Starting "flannel-457876" primary control-plane node in "flannel-457876" cluster
	I1028 18:49:53.477060   74640 start.go:364] duration metric: took 28.128189112s to acquireMachinesLock for "kindnet-457876"
	I1028 18:49:53.477126   74640 start.go:93] Provisioning new machine with config: &{Name:kindnet-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:kindnet-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:49:53.477250   74640 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 18:49:52.078119   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.078678   74377 main.go:141] libmachine: (auto-457876) Found IP for machine: 192.168.50.36
	I1028 18:49:52.078710   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has current primary IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.078716   74377 main.go:141] libmachine: (auto-457876) Reserving static IP address...
	I1028 18:49:52.079096   74377 main.go:141] libmachine: (auto-457876) DBG | unable to find host DHCP lease matching {name: "auto-457876", mac: "52:54:00:f2:3a:e8", ip: "192.168.50.36"} in network mk-auto-457876
	I1028 18:49:52.156974   74377 main.go:141] libmachine: (auto-457876) DBG | Getting to WaitForSSH function...
	I1028 18:49:52.156998   74377 main.go:141] libmachine: (auto-457876) Reserved static IP address: 192.168.50.36
	I1028 18:49:52.157009   74377 main.go:141] libmachine: (auto-457876) Waiting for SSH to be available...
	I1028 18:49:52.159423   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.159778   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.159805   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.159914   74377 main.go:141] libmachine: (auto-457876) DBG | Using SSH client type: external
	I1028 18:49:52.159934   74377 main.go:141] libmachine: (auto-457876) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa (-rw-------)
	I1028 18:49:52.159978   74377 main.go:141] libmachine: (auto-457876) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:49:52.159991   74377 main.go:141] libmachine: (auto-457876) DBG | About to run SSH command:
	I1028 18:49:52.160004   74377 main.go:141] libmachine: (auto-457876) DBG | exit 0
	I1028 18:49:52.284290   74377 main.go:141] libmachine: (auto-457876) DBG | SSH cmd err, output: <nil>: 
	I1028 18:49:52.284578   74377 main.go:141] libmachine: (auto-457876) KVM machine creation complete!
	I1028 18:49:52.284869   74377 main.go:141] libmachine: (auto-457876) Calling .GetConfigRaw
	I1028 18:49:52.285465   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:49:52.285638   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:49:52.285777   74377 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 18:49:52.285792   74377 main.go:141] libmachine: (auto-457876) Calling .GetState
	I1028 18:49:52.286984   74377 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 18:49:52.286996   74377 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 18:49:52.287002   74377 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 18:49:52.287007   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:52.289080   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.289450   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.289479   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.289662   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:52.289839   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.289984   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.290125   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:52.290282   74377 main.go:141] libmachine: Using SSH client type: native
	I1028 18:49:52.290519   74377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1028 18:49:52.290532   74377 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 18:49:52.395664   74377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:49:52.395686   74377 main.go:141] libmachine: Detecting the provisioner...
	I1028 18:49:52.395693   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:52.398036   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.398478   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.398504   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.398677   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:52.398862   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.398996   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.399089   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:52.399268   74377 main.go:141] libmachine: Using SSH client type: native
	I1028 18:49:52.399421   74377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1028 18:49:52.399430   74377 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 18:49:52.504928   74377 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 18:49:52.504990   74377 main.go:141] libmachine: found compatible host: buildroot
	I1028 18:49:52.504996   74377 main.go:141] libmachine: Provisioning with buildroot...
	I1028 18:49:52.505004   74377 main.go:141] libmachine: (auto-457876) Calling .GetMachineName
	I1028 18:49:52.505277   74377 buildroot.go:166] provisioning hostname "auto-457876"
	I1028 18:49:52.505309   74377 main.go:141] libmachine: (auto-457876) Calling .GetMachineName
	I1028 18:49:52.505522   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:52.507931   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.508299   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.508335   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.508440   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:52.508636   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.508794   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.508911   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:52.509076   74377 main.go:141] libmachine: Using SSH client type: native
	I1028 18:49:52.509277   74377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1028 18:49:52.509289   74377 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-457876 && echo "auto-457876" | sudo tee /etc/hostname
	I1028 18:49:52.626241   74377 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-457876
	
	I1028 18:49:52.626285   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:52.628865   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.629189   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.629216   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.629400   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:52.629572   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.629739   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.629871   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:52.630026   74377 main.go:141] libmachine: Using SSH client type: native
	I1028 18:49:52.630201   74377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1028 18:49:52.630216   74377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-457876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-457876/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-457876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:49:52.740756   74377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:49:52.740785   74377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:49:52.740829   74377 buildroot.go:174] setting up certificates
	I1028 18:49:52.740844   74377 provision.go:84] configureAuth start
	I1028 18:49:52.740861   74377 main.go:141] libmachine: (auto-457876) Calling .GetMachineName
	I1028 18:49:52.741129   74377 main.go:141] libmachine: (auto-457876) Calling .GetIP
	I1028 18:49:52.743552   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.743864   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.743905   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.743988   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:52.745995   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.746306   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.746329   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.746443   74377 provision.go:143] copyHostCerts
	I1028 18:49:52.746513   74377 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:49:52.746527   74377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:49:52.746580   74377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:49:52.746660   74377 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:49:52.746672   74377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:49:52.746698   74377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:49:52.746769   74377 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:49:52.746780   74377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:49:52.746807   74377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:49:52.746871   74377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.auto-457876 san=[127.0.0.1 192.168.50.36 auto-457876 localhost minikube]
	I1028 18:49:52.857255   74377 provision.go:177] copyRemoteCerts
	I1028 18:49:52.857312   74377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:49:52.857341   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:52.860025   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.860346   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:52.860388   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:52.860534   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:52.860697   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:52.860831   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:52.860928   74377 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa Username:docker}
	I1028 18:49:52.942917   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:49:52.966819   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1028 18:49:52.989857   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:49:53.013083   74377 provision.go:87] duration metric: took 272.224787ms to configureAuth
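The provision step above generates a server certificate signed by the cached CA, with san=[127.0.0.1 192.168.50.36 auto-457876 localhost minikube], and copies it to /etc/docker on the guest. The outline of that certificate generation can be reproduced with the Go standard library; the sketch below is an illustration only (a self-signed CA stands in for ca.pem/ca-key.pem, and the key size and lifetime are assumptions, not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for ca.pem / ca-key.pem (assumption).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the provision log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-457876"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"auto-457876", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.36")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}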
	I1028 18:49:53.013109   74377 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:49:53.013248   74377 config.go:182] Loaded profile config "auto-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:53.013328   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:53.015824   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.016159   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.016186   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.016367   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:53.016556   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:53.016708   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:53.016828   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:53.016971   74377 main.go:141] libmachine: Using SSH client type: native
	I1028 18:49:53.017127   74377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1028 18:49:53.017146   74377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:49:53.240841   74377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:49:53.240872   74377 main.go:141] libmachine: Checking connection to Docker...
	I1028 18:49:53.240881   74377 main.go:141] libmachine: (auto-457876) Calling .GetURL
	I1028 18:49:53.242011   74377 main.go:141] libmachine: (auto-457876) DBG | Using libvirt version 6000000
	I1028 18:49:53.244185   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.244525   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.244555   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.244675   74377 main.go:141] libmachine: Docker is up and running!
	I1028 18:49:53.244688   74377 main.go:141] libmachine: Reticulating splines...
	I1028 18:49:53.244694   74377 client.go:171] duration metric: took 24.765875968s to LocalClient.Create
	I1028 18:49:53.244718   74377 start.go:167] duration metric: took 24.765974167s to libmachine.API.Create "auto-457876"
	I1028 18:49:53.244727   74377 start.go:293] postStartSetup for "auto-457876" (driver="kvm2")
	I1028 18:49:53.244736   74377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:49:53.244751   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:49:53.244980   74377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:49:53.245002   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:53.246917   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.247205   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.247231   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.247339   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:53.247502   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:53.247619   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:53.247730   74377 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa Username:docker}
	I1028 18:49:53.330165   74377 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:49:53.334177   74377 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:49:53.334202   74377 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:49:53.334273   74377 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:49:53.334372   74377 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:49:53.334467   74377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:49:53.343391   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:49:53.366557   74377 start.go:296] duration metric: took 121.818943ms for postStartSetup
	I1028 18:49:53.366611   74377 main.go:141] libmachine: (auto-457876) Calling .GetConfigRaw
	I1028 18:49:53.367152   74377 main.go:141] libmachine: (auto-457876) Calling .GetIP
	I1028 18:49:53.369505   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.369835   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.369858   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.370099   74377 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/config.json ...
	I1028 18:49:53.370265   74377 start.go:128] duration metric: took 24.912880326s to createHost
	I1028 18:49:53.370294   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:53.372189   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.372489   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.372515   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.372641   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:53.372815   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:53.372968   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:53.373102   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:53.373261   74377 main.go:141] libmachine: Using SSH client type: native
	I1028 18:49:53.373469   74377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1028 18:49:53.373480   74377 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:49:53.476894   74377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730141393.449645467
	
	I1028 18:49:53.476916   74377 fix.go:216] guest clock: 1730141393.449645467
	I1028 18:49:53.476939   74377 fix.go:229] Guest: 2024-10-28 18:49:53.449645467 +0000 UTC Remote: 2024-10-28 18:49:53.370276791 +0000 UTC m=+34.060397340 (delta=79.368676ms)
	I1028 18:49:53.476965   74377 fix.go:200] guest clock delta is within tolerance: 79.368676ms
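The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta (here 79.368676ms) falls inside a tolerance. A minimal Go sketch of that comparison, using the timestamps from the log; the tolerance value is an assumption, not necessarily minikube's:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestDateOutput string, host time.Time) (time.Duration, error) {
	// guestDateOutput looks like "1730141393.449645467" (seconds.nanoseconds).
	s := strings.TrimSpace(guestDateOutput)
	secs, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host ("Remote") timestamp taken from the log line above.
	host := time.Date(2024, 10, 28, 18, 49, 53, 370276791, time.UTC)
	delta, _ := clockDelta("1730141393.449645467", host)
	const tolerance = time.Second // assumed tolerance for illustration
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
}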
	I1028 18:49:53.476972   74377 start.go:83] releasing machines lock for "auto-457876", held for 25.019722537s
	I1028 18:49:53.477001   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:49:53.477267   74377 main.go:141] libmachine: (auto-457876) Calling .GetIP
	I1028 18:49:53.479805   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.480134   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.480157   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.480282   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:49:53.480731   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:49:53.480876   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:49:53.480975   74377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:49:53.481012   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:53.481074   74377 ssh_runner.go:195] Run: cat /version.json
	I1028 18:49:53.481095   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:49:53.483552   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.483784   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.483857   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.483881   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.484027   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:53.484180   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:53.484260   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:53.484288   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:53.484312   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:53.484453   74377 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa Username:docker}
	I1028 18:49:53.484517   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:49:53.484661   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:49:53.484801   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:49:53.484919   74377 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa Username:docker}
	I1028 18:49:53.579783   74377 ssh_runner.go:195] Run: systemctl --version
	I1028 18:49:53.585806   74377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
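The sshutil/ssh_runner entries above open key-based SSH sessions to the guest and run commands like `systemctl --version` over them. A rough stand-in using golang.org/x/crypto/ssh, with the key path, user, and address shown in the log (host-key verification is skipped here purely because the target is a throwaway test VM; this is not minikube's ssh_runner implementation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa"
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "192.168.50.36:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("systemctl --version")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}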
	I1028 18:49:53.754176   74377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:49:53.762464   74377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:49:53.762535   74377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:49:53.786610   74377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:49:53.786637   74377 start.go:495] detecting cgroup driver to use...
	I1028 18:49:53.786700   74377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:49:53.805261   74377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:49:53.819569   74377 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:49:53.819620   74377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:49:53.833620   74377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:49:53.847552   74377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:49:53.960089   74377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:49:54.115553   74377 docker.go:233] disabling docker service ...
	I1028 18:49:54.115629   74377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:49:54.130231   74377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:49:54.143578   74377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:49:54.278057   74377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:49:49.472619   75305 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:49:49.472658   75305 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:49:49.472673   75305 cache.go:56] Caching tarball of preloaded images
	I1028 18:49:49.472725   75305 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:49:49.472735   75305 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:49:49.472812   75305 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/config.json ...
	I1028 18:49:49.472828   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/config.json: {Name:mkd05012736481921965828582db47db8dd2ef34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:49.472937   75305 start.go:360] acquireMachinesLock for flannel-457876: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:49:54.403070   74377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:49:54.416616   74377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:49:54.434441   74377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:49:54.434484   74377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:49:54.444307   74377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:49:54.444362   74377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:49:54.454244   74377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:49:54.463980   74377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:49:54.474013   74377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:49:54.483933   74377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:49:54.493910   74377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:49:54.512520   74377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
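The sed pipeline above pins the pause image to registry.k8s.io/pause:3.10, switches the cgroup manager to cgroupfs, sets conmon_cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into /etc/crio/crio.conf.d/02-crio.conf. The same line-oriented rewrites sketched in Go, on a stand-in config fragment (minikube itself shells out to sed; this only illustrates the edits being made):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in fragment of 02-crio.conf before the edits (contents assumed).
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// The conmon_cgroup and default_sysctls entries are appended in the same spirit.
	conf += "conmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}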
	I1028 18:49:54.522567   74377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:49:54.531539   74377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:49:54.531598   74377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:49:54.545158   74377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:49:54.553949   74377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:49:54.675413   74377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:49:54.763868   74377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:49:54.763943   74377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:49:54.768848   74377 start.go:563] Will wait 60s for crictl version
	I1028 18:49:54.768904   74377 ssh_runner.go:195] Run: which crictl
	I1028 18:49:54.772727   74377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:49:54.821138   74377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:49:54.821325   74377 ssh_runner.go:195] Run: crio --version
	I1028 18:49:54.855719   74377 ssh_runner.go:195] Run: crio --version
	I1028 18:49:54.895415   74377 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:49:53.479206   74640 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 18:49:53.479359   74640 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:49:53.479409   74640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:49:53.499485   74640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I1028 18:49:53.499951   74640 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:49:53.500505   74640 main.go:141] libmachine: Using API Version  1
	I1028 18:49:53.500529   74640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:49:53.500880   74640 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:49:53.501044   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetMachineName
	I1028 18:49:53.501207   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:49:53.501343   74640 start.go:159] libmachine.API.Create for "kindnet-457876" (driver="kvm2")
	I1028 18:49:53.501371   74640 client.go:168] LocalClient.Create starting
	I1028 18:49:53.501412   74640 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 18:49:53.501445   74640 main.go:141] libmachine: Decoding PEM data...
	I1028 18:49:53.501463   74640 main.go:141] libmachine: Parsing certificate...
	I1028 18:49:53.501544   74640 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 18:49:53.501575   74640 main.go:141] libmachine: Decoding PEM data...
	I1028 18:49:53.501597   74640 main.go:141] libmachine: Parsing certificate...
	I1028 18:49:53.501629   74640 main.go:141] libmachine: Running pre-create checks...
	I1028 18:49:53.501640   74640 main.go:141] libmachine: (kindnet-457876) Calling .PreCreateCheck
	I1028 18:49:53.501975   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetConfigRaw
	I1028 18:49:53.502352   74640 main.go:141] libmachine: Creating machine...
	I1028 18:49:53.502364   74640 main.go:141] libmachine: (kindnet-457876) Calling .Create
	I1028 18:49:53.502502   74640 main.go:141] libmachine: (kindnet-457876) Creating KVM machine...
	I1028 18:49:53.503435   74640 main.go:141] libmachine: (kindnet-457876) DBG | found existing default KVM network
	I1028 18:49:53.504273   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:53.504139   75370 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ad:d1:81} reservation:<nil>}
	I1028 18:49:53.504974   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:53.504912   75370 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:c8:05} reservation:<nil>}
	I1028 18:49:53.505954   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:53.505874   75370 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b0640}
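The network.go lines above scan 192.168.39.0/24 and 192.168.50.0/24, find them already held by existing virbr interfaces, and settle on 192.168.61.0/24 for the new libvirt network. A simplified version of that scan follows; the candidate list and the "taken" test are assumptions, and minikube's real logic also tracks reservations:

package main

import (
	"fmt"
	"net"
)

// gatewayTaken reports whether a local interface already owns the would-be
// gateway address (e.g. virbr1 holding 192.168.39.1).
func gatewayTaken(gateway string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gateway {
			return true
		}
	}
	return false
}

func main() {
	// Candidate third octets mirroring the 39 -> 50 -> 61 progression in the log.
	for octet := 39; octet <= 254; octet += 11 {
		if !gatewayTaken(fmt.Sprintf("192.168.%d.1", octet)) {
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			return
		}
	}
}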
	I1028 18:49:53.505979   74640 main.go:141] libmachine: (kindnet-457876) DBG | created network xml: 
	I1028 18:49:53.505999   74640 main.go:141] libmachine: (kindnet-457876) DBG | <network>
	I1028 18:49:53.506008   74640 main.go:141] libmachine: (kindnet-457876) DBG |   <name>mk-kindnet-457876</name>
	I1028 18:49:53.506015   74640 main.go:141] libmachine: (kindnet-457876) DBG |   <dns enable='no'/>
	I1028 18:49:53.506026   74640 main.go:141] libmachine: (kindnet-457876) DBG |   
	I1028 18:49:53.506034   74640 main.go:141] libmachine: (kindnet-457876) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1028 18:49:53.506062   74640 main.go:141] libmachine: (kindnet-457876) DBG |     <dhcp>
	I1028 18:49:53.506071   74640 main.go:141] libmachine: (kindnet-457876) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1028 18:49:53.506081   74640 main.go:141] libmachine: (kindnet-457876) DBG |     </dhcp>
	I1028 18:49:53.506087   74640 main.go:141] libmachine: (kindnet-457876) DBG |   </ip>
	I1028 18:49:53.506092   74640 main.go:141] libmachine: (kindnet-457876) DBG |   
	I1028 18:49:53.506097   74640 main.go:141] libmachine: (kindnet-457876) DBG | </network>
	I1028 18:49:53.506106   74640 main.go:141] libmachine: (kindnet-457876) DBG | 
	I1028 18:49:53.511247   74640 main.go:141] libmachine: (kindnet-457876) DBG | trying to create private KVM network mk-kindnet-457876 192.168.61.0/24...
	I1028 18:49:53.579044   74640 main.go:141] libmachine: (kindnet-457876) DBG | private KVM network mk-kindnet-457876 192.168.61.0/24 created
	I1028 18:49:53.579077   74640 main.go:141] libmachine: (kindnet-457876) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876 ...
	I1028 18:49:53.579090   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:53.579009   75370 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:49:53.579135   74640 main.go:141] libmachine: (kindnet-457876) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 18:49:53.579171   74640 main.go:141] libmachine: (kindnet-457876) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 18:49:53.840615   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:53.840459   75370 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa...
	I1028 18:49:54.198112   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:54.198010   75370 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/kindnet-457876.rawdisk...
	I1028 18:49:54.198133   74640 main.go:141] libmachine: (kindnet-457876) DBG | Writing magic tar header
	I1028 18:49:54.198142   74640 main.go:141] libmachine: (kindnet-457876) DBG | Writing SSH key tar header
	I1028 18:49:54.198154   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:54.198136   75370 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876 ...
	I1028 18:49:54.198254   74640 main.go:141] libmachine: (kindnet-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876
	I1028 18:49:54.198290   74640 main.go:141] libmachine: (kindnet-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876 (perms=drwx------)
	I1028 18:49:54.198312   74640 main.go:141] libmachine: (kindnet-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 18:49:54.198331   74640 main.go:141] libmachine: (kindnet-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 18:49:54.198350   74640 main.go:141] libmachine: (kindnet-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 18:49:54.198361   74640 main.go:141] libmachine: (kindnet-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 18:49:54.198379   74640 main.go:141] libmachine: (kindnet-457876) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 18:49:54.198392   74640 main.go:141] libmachine: (kindnet-457876) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 18:49:54.198484   74640 main.go:141] libmachine: (kindnet-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:49:54.198536   74640 main.go:141] libmachine: (kindnet-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 18:49:54.198553   74640 main.go:141] libmachine: (kindnet-457876) Creating domain...
	I1028 18:49:54.198566   74640 main.go:141] libmachine: (kindnet-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 18:49:54.198579   74640 main.go:141] libmachine: (kindnet-457876) DBG | Checking permissions on dir: /home/jenkins
	I1028 18:49:54.198590   74640 main.go:141] libmachine: (kindnet-457876) DBG | Checking permissions on dir: /home
	I1028 18:49:54.198607   74640 main.go:141] libmachine: (kindnet-457876) DBG | Skipping /home - not owner
	I1028 18:49:54.199580   74640 main.go:141] libmachine: (kindnet-457876) define libvirt domain using xml: 
	I1028 18:49:54.199603   74640 main.go:141] libmachine: (kindnet-457876) <domain type='kvm'>
	I1028 18:49:54.199613   74640 main.go:141] libmachine: (kindnet-457876)   <name>kindnet-457876</name>
	I1028 18:49:54.199620   74640 main.go:141] libmachine: (kindnet-457876)   <memory unit='MiB'>3072</memory>
	I1028 18:49:54.199626   74640 main.go:141] libmachine: (kindnet-457876)   <vcpu>2</vcpu>
	I1028 18:49:54.199629   74640 main.go:141] libmachine: (kindnet-457876)   <features>
	I1028 18:49:54.199634   74640 main.go:141] libmachine: (kindnet-457876)     <acpi/>
	I1028 18:49:54.199647   74640 main.go:141] libmachine: (kindnet-457876)     <apic/>
	I1028 18:49:54.199651   74640 main.go:141] libmachine: (kindnet-457876)     <pae/>
	I1028 18:49:54.199655   74640 main.go:141] libmachine: (kindnet-457876)     
	I1028 18:49:54.199660   74640 main.go:141] libmachine: (kindnet-457876)   </features>
	I1028 18:49:54.199664   74640 main.go:141] libmachine: (kindnet-457876)   <cpu mode='host-passthrough'>
	I1028 18:49:54.199668   74640 main.go:141] libmachine: (kindnet-457876)   
	I1028 18:49:54.199672   74640 main.go:141] libmachine: (kindnet-457876)   </cpu>
	I1028 18:49:54.199676   74640 main.go:141] libmachine: (kindnet-457876)   <os>
	I1028 18:49:54.199680   74640 main.go:141] libmachine: (kindnet-457876)     <type>hvm</type>
	I1028 18:49:54.199684   74640 main.go:141] libmachine: (kindnet-457876)     <boot dev='cdrom'/>
	I1028 18:49:54.199688   74640 main.go:141] libmachine: (kindnet-457876)     <boot dev='hd'/>
	I1028 18:49:54.199693   74640 main.go:141] libmachine: (kindnet-457876)     <bootmenu enable='no'/>
	I1028 18:49:54.199700   74640 main.go:141] libmachine: (kindnet-457876)   </os>
	I1028 18:49:54.199712   74640 main.go:141] libmachine: (kindnet-457876)   <devices>
	I1028 18:49:54.199719   74640 main.go:141] libmachine: (kindnet-457876)     <disk type='file' device='cdrom'>
	I1028 18:49:54.199727   74640 main.go:141] libmachine: (kindnet-457876)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/boot2docker.iso'/>
	I1028 18:49:54.199735   74640 main.go:141] libmachine: (kindnet-457876)       <target dev='hdc' bus='scsi'/>
	I1028 18:49:54.199748   74640 main.go:141] libmachine: (kindnet-457876)       <readonly/>
	I1028 18:49:54.199752   74640 main.go:141] libmachine: (kindnet-457876)     </disk>
	I1028 18:49:54.199763   74640 main.go:141] libmachine: (kindnet-457876)     <disk type='file' device='disk'>
	I1028 18:49:54.199769   74640 main.go:141] libmachine: (kindnet-457876)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 18:49:54.199776   74640 main.go:141] libmachine: (kindnet-457876)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/kindnet-457876.rawdisk'/>
	I1028 18:49:54.199782   74640 main.go:141] libmachine: (kindnet-457876)       <target dev='hda' bus='virtio'/>
	I1028 18:49:54.199787   74640 main.go:141] libmachine: (kindnet-457876)     </disk>
	I1028 18:49:54.199791   74640 main.go:141] libmachine: (kindnet-457876)     <interface type='network'>
	I1028 18:49:54.199796   74640 main.go:141] libmachine: (kindnet-457876)       <source network='mk-kindnet-457876'/>
	I1028 18:49:54.199800   74640 main.go:141] libmachine: (kindnet-457876)       <model type='virtio'/>
	I1028 18:49:54.199805   74640 main.go:141] libmachine: (kindnet-457876)     </interface>
	I1028 18:49:54.199808   74640 main.go:141] libmachine: (kindnet-457876)     <interface type='network'>
	I1028 18:49:54.199813   74640 main.go:141] libmachine: (kindnet-457876)       <source network='default'/>
	I1028 18:49:54.199820   74640 main.go:141] libmachine: (kindnet-457876)       <model type='virtio'/>
	I1028 18:49:54.199825   74640 main.go:141] libmachine: (kindnet-457876)     </interface>
	I1028 18:49:54.199829   74640 main.go:141] libmachine: (kindnet-457876)     <serial type='pty'>
	I1028 18:49:54.199833   74640 main.go:141] libmachine: (kindnet-457876)       <target port='0'/>
	I1028 18:49:54.199839   74640 main.go:141] libmachine: (kindnet-457876)     </serial>
	I1028 18:49:54.199843   74640 main.go:141] libmachine: (kindnet-457876)     <console type='pty'>
	I1028 18:49:54.199847   74640 main.go:141] libmachine: (kindnet-457876)       <target type='serial' port='0'/>
	I1028 18:49:54.199851   74640 main.go:141] libmachine: (kindnet-457876)     </console>
	I1028 18:49:54.199855   74640 main.go:141] libmachine: (kindnet-457876)     <rng model='virtio'>
	I1028 18:49:54.199883   74640 main.go:141] libmachine: (kindnet-457876)       <backend model='random'>/dev/random</backend>
	I1028 18:49:54.199913   74640 main.go:141] libmachine: (kindnet-457876)     </rng>
	I1028 18:49:54.199923   74640 main.go:141] libmachine: (kindnet-457876)     
	I1028 18:49:54.199929   74640 main.go:141] libmachine: (kindnet-457876)     
	I1028 18:49:54.199937   74640 main.go:141] libmachine: (kindnet-457876)   </devices>
	I1028 18:49:54.199943   74640 main.go:141] libmachine: (kindnet-457876) </domain>
	I1028 18:49:54.199952   74640 main.go:141] libmachine: (kindnet-457876) 
	I1028 18:49:54.204123   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:9f:7e:d8 in network default
	I1028 18:49:54.204697   74640 main.go:141] libmachine: (kindnet-457876) Ensuring networks are active...
	I1028 18:49:54.204718   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:54.205345   74640 main.go:141] libmachine: (kindnet-457876) Ensuring network default is active
	I1028 18:49:54.205651   74640 main.go:141] libmachine: (kindnet-457876) Ensuring network mk-kindnet-457876 is active
	I1028 18:49:54.206041   74640 main.go:141] libmachine: (kindnet-457876) Getting domain xml...
	I1028 18:49:54.206703   74640 main.go:141] libmachine: (kindnet-457876) Creating domain...
	I1028 18:49:54.896668   74377 main.go:141] libmachine: (auto-457876) Calling .GetIP
	I1028 18:49:54.899576   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:54.899992   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:49:54.900019   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:49:54.900195   74377 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:49:54.904181   74377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:49:54.917356   74377 kubeadm.go:883] updating cluster {Name:auto-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:49:54.917453   74377 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:49:54.917502   74377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:49:54.949346   74377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:49:54.949411   74377 ssh_runner.go:195] Run: which lz4
	I1028 18:49:54.953178   74377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:49:54.957119   74377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:49:54.957147   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:49:56.362096   74377 crio.go:462] duration metric: took 1.408943182s to copy over tarball
	I1028 18:49:56.362167   74377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:49:58.600340   74377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.238134263s)
	I1028 18:49:58.600374   74377 crio.go:469] duration metric: took 2.238250013s to extract the tarball
	I1028 18:49:58.600384   74377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:49:58.638812   74377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:49:58.680087   74377 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:49:58.680111   74377 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:49:58.680120   74377 kubeadm.go:934] updating node { 192.168.50.36 8443 v1.31.2 crio true true} ...
	I1028 18:49:58.680260   74377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-457876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:auto-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:49:58.680362   74377 ssh_runner.go:195] Run: crio config
	I1028 18:49:58.729454   74377 cni.go:84] Creating CNI manager for ""
	I1028 18:49:58.729480   74377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:49:58.729492   74377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:49:58.729519   74377 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-457876 NodeName:auto-457876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:49:58.729666   74377 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-457876"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.36"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:49:58.729727   74377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:49:58.739496   74377 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:49:58.739561   74377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:49:58.748656   74377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1028 18:49:58.764994   74377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:49:58.783184   74377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2288 bytes)
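The kubeadm config dumped above is rendered from the profile's node name, advertise IP, and CRI socket and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A small text/template sketch of that rendering step; the template fragment here is abbreviated and hypothetical, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// fragment is a cut-down, assumed stand-in for the generated InitConfiguration.
const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
`

func main() {
	data := struct{ Name, IP string }{Name: "auto-457876", IP: "192.168.50.36"}
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}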
	I1028 18:49:58.800829   74377 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I1028 18:49:58.804786   74377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:49:58.816433   74377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:49:58.927471   74377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:49:58.944549   74377 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876 for IP: 192.168.50.36
	I1028 18:49:58.944569   74377 certs.go:194] generating shared ca certs ...
	I1028 18:49:58.944589   74377 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:58.944757   74377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:49:58.944816   74377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:49:58.944828   74377 certs.go:256] generating profile certs ...
	I1028 18:49:58.944894   74377 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/client.key
	I1028 18:49:58.944911   74377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/client.crt with IP's: []
	I1028 18:49:59.117113   74377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/client.crt ...
	I1028 18:49:59.117140   74377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/client.crt: {Name:mk7e7b0847f2ecc164fb262169827a85fc9c1ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:59.117305   74377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/client.key ...
	I1028 18:49:59.117315   74377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/client.key: {Name:mk195e426e3758e59668324a4b74073a463aa9c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:59.117386   74377 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.key.cb939323
	I1028 18:49:59.117400   74377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.crt.cb939323 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.36]
	I1028 18:49:59.236992   74377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.crt.cb939323 ...
	I1028 18:49:59.237019   74377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.crt.cb939323: {Name:mk59d55a61317025a53fed3641d5814da1d384df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:59.237164   74377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.key.cb939323 ...
	I1028 18:49:59.237176   74377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.key.cb939323: {Name:mkd183ab5678ec3fde85b52e91c54d0578ae93da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:59.237239   74377 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.crt.cb939323 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.crt
	I1028 18:49:59.237316   74377 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.key.cb939323 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.key
	I1028 18:49:59.237366   74377 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.key
	I1028 18:49:59.237379   74377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.crt with IP's: []
	I1028 18:49:55.539703   74640 main.go:141] libmachine: (kindnet-457876) Waiting to get IP...
	I1028 18:49:55.540512   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:55.541251   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:55.541354   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:55.541250   75370 retry.go:31] will retry after 246.210851ms: waiting for machine to come up
	I1028 18:49:55.788834   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:55.789430   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:55.789473   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:55.789351   75370 retry.go:31] will retry after 316.523202ms: waiting for machine to come up
	I1028 18:49:56.107818   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:56.108363   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:56.108390   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:56.108319   75370 retry.go:31] will retry after 295.704649ms: waiting for machine to come up
	I1028 18:49:56.405867   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:56.406319   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:56.406387   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:56.406294   75370 retry.go:31] will retry after 464.250041ms: waiting for machine to come up
	I1028 18:49:56.871745   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:56.872240   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:56.872272   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:56.872179   75370 retry.go:31] will retry after 606.843508ms: waiting for machine to come up
	I1028 18:49:57.481181   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:57.481640   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:57.481665   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:57.481591   75370 retry.go:31] will retry after 631.564894ms: waiting for machine to come up
	I1028 18:49:58.115507   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:58.115966   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:58.115992   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:58.115920   75370 retry.go:31] will retry after 1.187260091s: waiting for machine to come up
	I1028 18:49:59.304584   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:49:59.304975   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:49:59.305005   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:49:59.304935   75370 retry.go:31] will retry after 1.034530345s: waiting for machine to come up
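The retry.go lines above poll for the new domain's DHCP lease with a growing, jittered delay (246ms, 316ms, ... up to roughly 1.2s) until libvirt reports an IP address. A generic version of that wait loop is sketched below; the initial interval, growth factor, and timeout are assumptions rather than minikube's actual constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls cond with a growing, jittered backoff until it returns
// true or the overall deadline expires.
func retryUntil(deadline time.Duration, cond func() bool) error {
	start := time.Now()
	wait := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if cond() {
			return nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	return errors.New("timed out waiting for machine to get an IP address")
}

func main() {
	attempts := 0
	_ = retryUntil(30*time.Second, func() bool {
		attempts++
		return attempts >= 5 // stand-in for "domain has an IP address in its network"
	})
}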
	I1028 18:49:59.473292   74377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.crt ...
	I1028 18:49:59.473318   74377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.crt: {Name:mkef50ec90a89e60be4ad323b43095c6a9ee1470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:59.473511   74377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.key ...
	I1028 18:49:59.473528   74377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.key: {Name:mkd12a98bb642f35fa3eaac71aed6c764021a676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:49:59.473728   74377 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:49:59.473773   74377 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:49:59.473787   74377 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:49:59.473834   74377 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:49:59.473876   74377 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:49:59.473907   74377 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:49:59.473959   74377 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:49:59.474601   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:49:59.501684   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:49:59.527978   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:49:59.551398   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:49:59.575814   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1028 18:49:59.598948   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:49:59.623204   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:49:59.645887   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/auto-457876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:49:59.668069   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:49:59.691516   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:49:59.715939   74377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:49:59.738889   74377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:49:59.758821   74377 ssh_runner.go:195] Run: openssl version
	I1028 18:49:59.765663   74377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:49:59.777358   74377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:49:59.782183   74377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:49:59.782227   74377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:49:59.788254   74377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:49:59.799218   74377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:49:59.809859   74377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:49:59.814194   74377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:49:59.814255   74377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:49:59.819800   74377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:49:59.830144   74377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:49:59.841072   74377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:49:59.845519   74377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:49:59.845569   74377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:49:59.852902   74377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
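Note on the ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0): these names follow the OpenSSL hashed-directory convention, where each symlink in /etc/ssl/certs is the certificate's subject hash plus a ".0" suffix, so that TLS verification can locate the CA by hash. A minimal sketch of the same step done by hand, reusing a path from the log but otherwise assumed:

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"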
	I1028 18:49:59.863566   74377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:49:59.867633   74377 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 18:49:59.867694   74377 kubeadm.go:392] StartCluster: {Name:auto-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:49:59.867783   74377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:49:59.867837   74377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:49:59.905734   74377 cri.go:89] found id: ""
	I1028 18:49:59.905811   74377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:49:59.916252   74377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:49:59.925351   74377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:49:59.934404   74377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:49:59.934428   74377 kubeadm.go:157] found existing configuration files:
	
	I1028 18:49:59.934477   74377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:49:59.943141   74377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:49:59.943201   74377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:49:59.952248   74377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:49:59.960796   74377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:49:59.960844   74377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:49:59.969975   74377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:49:59.979218   74377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:49:59.979271   74377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:49:59.989877   74377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:49:59.998272   74377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:49:59.998325   74377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:50:00.007013   74377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:50:00.166150   74377 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
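The preflight warning above only notes that the kubelet systemd unit is not enabled at boot; kubeadm continues regardless. The remedy it points to (not executed in this log) would be:

    # Enable the kubelet unit so it starts automatically on boot.
    sudo systemctl enable kubelet.service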
	I1028 18:50:00.341036   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:00.341546   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:50:00.341576   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:50:00.341492   75370 retry.go:31] will retry after 1.374795361s: waiting for machine to come up
	I1028 18:50:01.717493   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:01.717957   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:50:01.717983   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:50:01.717922   75370 retry.go:31] will retry after 1.418285511s: waiting for machine to come up
	I1028 18:50:03.138173   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:03.138687   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:50:03.138723   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:50:03.138615   75370 retry.go:31] will retry after 2.253874847s: waiting for machine to come up
	I1028 18:50:05.394035   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:05.394538   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:50:05.394569   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:50:05.394486   75370 retry.go:31] will retry after 3.549218861s: waiting for machine to come up
	I1028 18:50:08.944769   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:08.945198   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:50:08.945219   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:50:08.945149   75370 retry.go:31] will retry after 3.062188846s: waiting for machine to come up
	I1028 18:50:10.621983   74377 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:50:10.622052   74377 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:50:10.622161   74377 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:50:10.622302   74377 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:50:10.622444   74377 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:50:10.622501   74377 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:50:10.624072   74377 out.go:235]   - Generating certificates and keys ...
	I1028 18:50:10.624138   74377 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:50:10.624206   74377 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:50:10.624282   74377 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 18:50:10.624338   74377 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 18:50:10.624393   74377 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 18:50:10.624437   74377 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 18:50:10.624506   74377 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 18:50:10.624662   74377 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-457876 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	I1028 18:50:10.624764   74377 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 18:50:10.624936   74377 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-457876 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	I1028 18:50:10.625030   74377 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 18:50:10.625127   74377 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 18:50:10.625194   74377 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 18:50:10.625279   74377 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:50:10.625352   74377 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:50:10.625435   74377 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:50:10.625512   74377 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:50:10.625613   74377 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:50:10.625692   74377 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:50:10.625820   74377 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:50:10.625910   74377 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:50:10.627078   74377 out.go:235]   - Booting up control plane ...
	I1028 18:50:10.627160   74377 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:50:10.627240   74377 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:50:10.627308   74377 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:50:10.627410   74377 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:50:10.627486   74377 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:50:10.627528   74377 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:50:10.627666   74377 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:50:10.627809   74377 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:50:10.627899   74377 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001976097s
	I1028 18:50:10.627996   74377 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:50:10.628051   74377 kubeadm.go:310] [api-check] The API server is healthy after 5.002721555s
	I1028 18:50:10.628148   74377 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:50:10.628258   74377 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:50:10.628311   74377 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:50:10.628514   74377 kubeadm.go:310] [mark-control-plane] Marking the node auto-457876 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:50:10.628596   74377 kubeadm.go:310] [bootstrap-token] Using token: 4a9lf6.emusx74ufksta9sq
	I1028 18:50:10.629897   74377 out.go:235]   - Configuring RBAC rules ...
	I1028 18:50:10.629988   74377 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:50:10.630091   74377 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:50:10.630222   74377 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:50:10.630344   74377 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:50:10.630443   74377 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:50:10.630515   74377 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:50:10.630624   74377 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:50:10.630669   74377 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:50:10.630716   74377 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:50:10.630723   74377 kubeadm.go:310] 
	I1028 18:50:10.630781   74377 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:50:10.630794   74377 kubeadm.go:310] 
	I1028 18:50:10.630882   74377 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:50:10.630891   74377 kubeadm.go:310] 
	I1028 18:50:10.630924   74377 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:50:10.631005   74377 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:50:10.631079   74377 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:50:10.631090   74377 kubeadm.go:310] 
	I1028 18:50:10.631133   74377 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:50:10.631139   74377 kubeadm.go:310] 
	I1028 18:50:10.631177   74377 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:50:10.631183   74377 kubeadm.go:310] 
	I1028 18:50:10.631225   74377 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:50:10.631297   74377 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:50:10.631365   74377 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:50:10.631375   74377 kubeadm.go:310] 
	I1028 18:50:10.631445   74377 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:50:10.631525   74377 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:50:10.631536   74377 kubeadm.go:310] 
	I1028 18:50:10.631609   74377 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4a9lf6.emusx74ufksta9sq \
	I1028 18:50:10.631697   74377 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:50:10.631718   74377 kubeadm.go:310] 	--control-plane 
	I1028 18:50:10.631722   74377 kubeadm.go:310] 
	I1028 18:50:10.631793   74377 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:50:10.631799   74377 kubeadm.go:310] 
	I1028 18:50:10.631870   74377 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4a9lf6.emusx74ufksta9sq \
	I1028 18:50:10.631966   74377 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
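The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash. That hash can be recomputed from the CA certificate copied to /var/lib/minikube/certs earlier, to confirm it matches before joining a node; a sketch using the standard OpenSSL pipeline (assumed, not run in this log):

    # Recompute the sha256 hash of the cluster CA public key used by kubeadm join.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'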
	I1028 18:50:10.631983   74377 cni.go:84] Creating CNI manager for ""
	I1028 18:50:10.631993   74377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:50:10.633508   74377 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:50:10.634684   74377 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:50:10.646482   74377 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
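The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen at cni.go:146. Its exact contents are not reproduced in the log; a representative bridge-plus-portmap conflist of the same shape (illustrative only, all field values assumed) would be:

    # Illustrative only: a typical bridge CNI chain, not the literal file from the log.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF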
	I1028 18:50:10.664433   74377 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:50:10.664516   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:10.664526   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-457876 minikube.k8s.io/updated_at=2024_10_28T18_50_10_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=auto-457876 minikube.k8s.io/primary=true
	I1028 18:50:10.831099   74377 ops.go:34] apiserver oom_adj: -16
	I1028 18:50:10.831259   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:11.332295   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:11.832210   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:12.332151   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:12.831337   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:13.332020   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:13.831373   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:14.332185   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:14.832093   74377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:14.948105   74377 kubeadm.go:1113] duration metric: took 4.283659232s to wait for elevateKubeSystemPrivileges
	I1028 18:50:14.948150   74377 kubeadm.go:394] duration metric: took 15.080466212s to StartCluster
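The burst of kubectl get sa default calls between 18:50:10.831 and 18:50:14.832 is a poll: after creating the minikube-rbac ClusterRoleBinding, minikube waits for the default ServiceAccount to be provisioned before treating privilege elevation as complete, retrying roughly every 500ms (about 4.3s in this run). An equivalent standalone wait loop (assumed, not from the log):

    # Poll until the default ServiceAccount exists in the cluster.
    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done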
	I1028 18:50:14.948173   74377 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:14.948266   74377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:50:14.949614   74377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:14.949864   74377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 18:50:14.949882   74377 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:50:14.949943   74377 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:50:14.950042   74377 addons.go:69] Setting storage-provisioner=true in profile "auto-457876"
	I1028 18:50:14.950063   74377 addons.go:234] Setting addon storage-provisioner=true in "auto-457876"
	I1028 18:50:14.950077   74377 addons.go:69] Setting default-storageclass=true in profile "auto-457876"
	I1028 18:50:14.950102   74377 host.go:66] Checking if "auto-457876" exists ...
	I1028 18:50:14.950109   74377 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-457876"
	I1028 18:50:14.950099   74377 config.go:182] Loaded profile config "auto-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:50:14.950581   74377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:14.950581   74377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:14.950632   74377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:14.950643   74377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:14.951287   74377 out.go:177] * Verifying Kubernetes components...
	I1028 18:50:14.952580   74377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:50:14.965768   74377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33977
	I1028 18:50:14.965770   74377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I1028 18:50:14.966224   74377 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:14.966237   74377 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:14.966694   74377 main.go:141] libmachine: Using API Version  1
	I1028 18:50:14.966724   74377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:14.966745   74377 main.go:141] libmachine: Using API Version  1
	I1028 18:50:14.966761   74377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:14.967077   74377 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:14.967077   74377 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:14.967241   74377 main.go:141] libmachine: (auto-457876) Calling .GetState
	I1028 18:50:14.967613   74377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:14.967655   74377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:14.970558   74377 addons.go:234] Setting addon default-storageclass=true in "auto-457876"
	I1028 18:50:14.970599   74377 host.go:66] Checking if "auto-457876" exists ...
	I1028 18:50:14.970938   74377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:14.970978   74377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:14.982073   74377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I1028 18:50:14.982523   74377 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:14.982957   74377 main.go:141] libmachine: Using API Version  1
	I1028 18:50:14.982974   74377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:14.983330   74377 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:14.983508   74377 main.go:141] libmachine: (auto-457876) Calling .GetState
	I1028 18:50:14.985254   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:50:14.986000   74377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I1028 18:50:14.986311   74377 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:14.986723   74377 main.go:141] libmachine: Using API Version  1
	I1028 18:50:14.986745   74377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:14.986944   74377 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:50:12.008412   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:12.008801   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find current IP address of domain kindnet-457876 in network mk-kindnet-457876
	I1028 18:50:12.008825   74640 main.go:141] libmachine: (kindnet-457876) DBG | I1028 18:50:12.008758   75370 retry.go:31] will retry after 4.911436358s: waiting for machine to come up
	I1028 18:50:14.987048   74377 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:14.987541   74377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:14.987624   74377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:14.988228   74377 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:50:14.988244   74377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:50:14.988262   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:50:14.991115   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:50:14.991521   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:50:14.991551   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:50:14.991749   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:50:14.991896   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:50:14.992116   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:50:14.992236   74377 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa Username:docker}
	I1028 18:50:15.002819   74377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1028 18:50:15.003319   74377 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:15.003796   74377 main.go:141] libmachine: Using API Version  1
	I1028 18:50:15.003819   74377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:15.004100   74377 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:15.004283   74377 main.go:141] libmachine: (auto-457876) Calling .GetState
	I1028 18:50:15.005665   74377 main.go:141] libmachine: (auto-457876) Calling .DriverName
	I1028 18:50:15.005859   74377 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:50:15.005873   74377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:50:15.005888   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHHostname
	I1028 18:50:15.008221   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:50:15.008519   74377 main.go:141] libmachine: (auto-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:e8", ip: ""} in network mk-auto-457876: {Iface:virbr2 ExpiryTime:2024-10-28 19:49:43 +0000 UTC Type:0 Mac:52:54:00:f2:3a:e8 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:auto-457876 Clientid:01:52:54:00:f2:3a:e8}
	I1028 18:50:15.008547   74377 main.go:141] libmachine: (auto-457876) DBG | domain auto-457876 has defined IP address 192.168.50.36 and MAC address 52:54:00:f2:3a:e8 in network mk-auto-457876
	I1028 18:50:15.008656   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHPort
	I1028 18:50:15.008820   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHKeyPath
	I1028 18:50:15.008972   74377 main.go:141] libmachine: (auto-457876) Calling .GetSSHUsername
	I1028 18:50:15.009101   74377 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/auto-457876/id_rsa Username:docker}
	I1028 18:50:15.279316   74377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:50:15.279455   74377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 18:50:15.286272   74377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:50:15.377737   74377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:50:15.933975   74377 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
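The "host record injected" message above is the result of the sed pipeline run at 18:50:15.279: it rewrites the coredns ConfigMap so pods can resolve host.minikube.internal. Reconstructed from the sed expressions themselves (not dumped from the cluster), the change inserts a hosts block before the forward plugin and a log directive before errors:

    # Inspect the patched Corefile (assumed follow-up, not run in this log):
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # The injected stanza has this form:
    #   hosts {
    #      192.168.50.1 host.minikube.internal
    #      fallthrough
    #   }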
	I1028 18:50:15.934056   74377 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:15.934079   74377 main.go:141] libmachine: (auto-457876) Calling .Close
	I1028 18:50:15.934462   74377 main.go:141] libmachine: (auto-457876) DBG | Closing plugin on server side
	I1028 18:50:15.934487   74377 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:15.934501   74377 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:15.934510   74377 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:15.934522   74377 main.go:141] libmachine: (auto-457876) Calling .Close
	I1028 18:50:15.934780   74377 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:15.934794   74377 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:15.935235   74377 node_ready.go:35] waiting up to 15m0s for node "auto-457876" to be "Ready" ...
	I1028 18:50:15.955942   74377 node_ready.go:49] node "auto-457876" has status "Ready":"True"
	I1028 18:50:15.955965   74377 node_ready.go:38] duration metric: took 20.694806ms for node "auto-457876" to be "Ready" ...
	I1028 18:50:15.955976   74377 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:50:15.962722   74377 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:15.962743   74377 main.go:141] libmachine: (auto-457876) Calling .Close
	I1028 18:50:15.962985   74377 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:15.963005   74377 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:15.971177   74377 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:16.445376   74377 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-457876" context rescaled to 1 replicas
	I1028 18:50:16.491586   74377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.113815138s)
	I1028 18:50:16.491641   74377 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:16.491654   74377 main.go:141] libmachine: (auto-457876) Calling .Close
	I1028 18:50:16.491978   74377 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:16.492001   74377 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:16.492011   74377 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:16.492020   74377 main.go:141] libmachine: (auto-457876) Calling .Close
	I1028 18:50:16.492274   74377 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:16.492290   74377 main.go:141] libmachine: (auto-457876) DBG | Closing plugin on server side
	I1028 18:50:16.492298   74377 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:16.494165   74377 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 18:50:16.495431   74377 addons.go:510] duration metric: took 1.545488423s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 18:50:17.977885   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace has status "Ready":"False"
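At this point the auto-457876 start is blocked on pod readiness: the node reported Ready within about 21ms, but coredns-7c65d6cfc9-5272q is still not Ready, so pod_ready.go keeps polling. A manual equivalent of that wait (commands assumed, not taken from the log):

    # Wait for the node and the system DNS pods to become Ready.
    kubectl --context auto-457876 wait --for=condition=Ready node/auto-457876 --timeout=15m
    kubectl --context auto-457876 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m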
	I1028 18:50:16.921524   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:16.921920   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has current primary IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:16.921946   74640 main.go:141] libmachine: (kindnet-457876) Found IP for machine: 192.168.61.41
	I1028 18:50:16.921962   74640 main.go:141] libmachine: (kindnet-457876) Reserving static IP address...
	I1028 18:50:16.922184   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find host DHCP lease matching {name: "kindnet-457876", mac: "52:54:00:cd:e5:15", ip: "192.168.61.41"} in network mk-kindnet-457876
	I1028 18:50:16.997846   74640 main.go:141] libmachine: (kindnet-457876) Reserved static IP address: 192.168.61.41
	I1028 18:50:16.997872   74640 main.go:141] libmachine: (kindnet-457876) Waiting for SSH to be available...
	I1028 18:50:16.997881   74640 main.go:141] libmachine: (kindnet-457876) DBG | Getting to WaitForSSH function...
	I1028 18:50:17.000732   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:17.001099   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876
	I1028 18:50:17.001127   74640 main.go:141] libmachine: (kindnet-457876) DBG | unable to find defined IP address of network mk-kindnet-457876 interface with MAC address 52:54:00:cd:e5:15
	I1028 18:50:17.001255   74640 main.go:141] libmachine: (kindnet-457876) DBG | Using SSH client type: external
	I1028 18:50:17.001278   74640 main.go:141] libmachine: (kindnet-457876) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa (-rw-------)
	I1028 18:50:17.001311   74640 main.go:141] libmachine: (kindnet-457876) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:50:17.001328   74640 main.go:141] libmachine: (kindnet-457876) DBG | About to run SSH command:
	I1028 18:50:17.001340   74640 main.go:141] libmachine: (kindnet-457876) DBG | exit 0
	I1028 18:50:17.004954   74640 main.go:141] libmachine: (kindnet-457876) DBG | SSH cmd err, output: exit status 255: 
	I1028 18:50:17.004978   74640 main.go:141] libmachine: (kindnet-457876) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 18:50:17.004988   74640 main.go:141] libmachine: (kindnet-457876) DBG | command : exit 0
	I1028 18:50:17.004999   74640 main.go:141] libmachine: (kindnet-457876) DBG | err     : exit status 255
	I1028 18:50:17.005029   74640 main.go:141] libmachine: (kindnet-457876) DBG | output  : 
	I1028 18:50:20.005154   74640 main.go:141] libmachine: (kindnet-457876) DBG | Getting to WaitForSSH function...
	I1028 18:50:20.007519   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.007837   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.007867   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.007951   74640 main.go:141] libmachine: (kindnet-457876) DBG | Using SSH client type: external
	I1028 18:50:20.007978   74640 main.go:141] libmachine: (kindnet-457876) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa (-rw-------)
	I1028 18:50:20.008005   74640 main.go:141] libmachine: (kindnet-457876) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:50:20.008021   74640 main.go:141] libmachine: (kindnet-457876) DBG | About to run SSH command:
	I1028 18:50:20.008033   74640 main.go:141] libmachine: (kindnet-457876) DBG | exit 0
	I1028 18:50:20.128417   74640 main.go:141] libmachine: (kindnet-457876) DBG | SSH cmd err, output: <nil>: 
	I1028 18:50:20.128702   74640 main.go:141] libmachine: (kindnet-457876) KVM machine creation complete!
	I1028 18:50:20.129000   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetConfigRaw
	I1028 18:50:20.129535   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:20.129753   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:20.129915   74640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 18:50:20.129930   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetState
	I1028 18:50:20.131363   74640 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 18:50:20.131381   74640 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 18:50:20.131388   74640 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 18:50:20.131395   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:20.134028   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.134424   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.134443   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.134565   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:20.134735   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.134883   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.135028   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:20.135159   74640 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:20.135378   74640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I1028 18:50:20.135391   74640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 18:50:20.235772   74640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:50:20.235792   74640 main.go:141] libmachine: Detecting the provisioner...
	I1028 18:50:20.235800   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:20.238344   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.238722   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.238752   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.238904   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:20.239099   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.239253   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.239418   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:20.239567   74640 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:20.239765   74640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I1028 18:50:20.239778   74640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 18:50:21.473441   75305 start.go:364] duration metric: took 32.000482161s to acquireMachinesLock for "flannel-457876"
	I1028 18:50:21.473502   75305 start.go:93] Provisioning new machine with config: &{Name:flannel-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:50:21.473614   75305 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 18:50:20.341003   74640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 18:50:20.341115   74640 main.go:141] libmachine: found compatible host: buildroot
	I1028 18:50:20.341133   74640 main.go:141] libmachine: Provisioning with buildroot...
	I1028 18:50:20.341143   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetMachineName
	I1028 18:50:20.341397   74640 buildroot.go:166] provisioning hostname "kindnet-457876"
	I1028 18:50:20.341421   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetMachineName
	I1028 18:50:20.341632   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:20.344333   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.344706   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.344732   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.344853   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:20.345026   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.345173   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.345281   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:20.345428   74640 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:20.345657   74640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I1028 18:50:20.345676   74640 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-457876 && echo "kindnet-457876" | sudo tee /etc/hostname
	I1028 18:50:20.460518   74640 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-457876
	
	I1028 18:50:20.460555   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:20.463401   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.463835   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.463863   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.464099   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:20.464274   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.464496   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.464651   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:20.464831   74640 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:20.465056   74640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I1028 18:50:20.465077   74640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-457876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-457876/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-457876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:50:20.573636   74640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
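
The two SSH commands above (set the hostname, then patch /etc/hosts) are composed host-side as plain shell strings. A minimal Go sketch of assembling them for an arbitrary hostname; buildHostnameCmds is a hypothetical helper written for this sketch, not minikube's source.

    package main

    import "fmt"

    // buildHostnameCmds reassembles the two provisioning commands shown above
    // for an arbitrary hostname; illustrative only.
    func buildHostnameCmds(hostname string) (setHostname, fixHosts string) {
        setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
        fixHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, hostname)
        return setHostname, fixHosts
    }

    func main() {
        set, fix := buildHostnameCmds("kindnet-457876")
        fmt.Println(set)
        fmt.Println(fix)
    }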
	I1028 18:50:20.573670   74640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:50:20.573708   74640 buildroot.go:174] setting up certificates
	I1028 18:50:20.573723   74640 provision.go:84] configureAuth start
	I1028 18:50:20.573743   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetMachineName
	I1028 18:50:20.574038   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetIP
	I1028 18:50:20.576834   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.577192   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.577221   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.577407   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:20.579648   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.580037   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.580066   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.580235   74640 provision.go:143] copyHostCerts
	I1028 18:50:20.580307   74640 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:50:20.580324   74640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:50:20.580401   74640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:50:20.580552   74640 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:50:20.580562   74640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:50:20.580590   74640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:50:20.580674   74640 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:50:20.580685   74640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:50:20.580714   74640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:50:20.580791   74640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.kindnet-457876 san=[127.0.0.1 192.168.61.41 kindnet-457876 localhost minikube]
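
The server certificate above is generated with both IP and DNS SANs (127.0.0.1, 192.168.61.41, the hostname, localhost, minikube). A self-contained Go sketch of building such a template with crypto/x509; it is self-signed purely to stay standalone, whereas the log signs against the minikube CA key, and the 26280h lifetime only mirrors the CertExpiration value in the machine config.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs as they appear in the provision log above.
        sans := []string{"127.0.0.1", "192.168.61.41", "kindnet-457876", "localhost", "minikube"}

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-457876"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Split SANs into IP and DNS entries, as a server cert needs both kinds.
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }

        // Self-signed here only to keep the sketch standalone; the real flow signs with the CA key.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        fmt.Fprintln(os.Stderr, "server cert SANs:", tmpl.DNSNames, tmpl.IPAddresses)
    }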
	I1028 18:50:20.852406   74640 provision.go:177] copyRemoteCerts
	I1028 18:50:20.852489   74640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:50:20.852519   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:20.855081   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.855418   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:20.855451   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:20.855599   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:20.855782   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:20.855906   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:20.856024   74640 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa Username:docker}
	I1028 18:50:20.935519   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:50:20.960092   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1028 18:50:20.984843   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:50:21.010443   74640 provision.go:87] duration metric: took 436.703595ms to configureAuth
	I1028 18:50:21.010470   74640 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:50:21.010625   74640 config.go:182] Loaded profile config "kindnet-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:50:21.010701   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:21.013641   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.013981   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.014016   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.014340   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:21.014516   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:21.014638   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:21.014759   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:21.014902   74640 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:21.015061   74640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I1028 18:50:21.015077   74640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:50:21.234589   74640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:50:21.234620   74640 main.go:141] libmachine: Checking connection to Docker...
	I1028 18:50:21.234631   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetURL
	I1028 18:50:21.235896   74640 main.go:141] libmachine: (kindnet-457876) DBG | Using libvirt version 6000000
	I1028 18:50:21.238089   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.238460   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.238490   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.238628   74640 main.go:141] libmachine: Docker is up and running!
	I1028 18:50:21.238643   74640 main.go:141] libmachine: Reticulating splines...
	I1028 18:50:21.238651   74640 client.go:171] duration metric: took 27.73726968s to LocalClient.Create
	I1028 18:50:21.238675   74640 start.go:167] duration metric: took 27.737333938s to libmachine.API.Create "kindnet-457876"
	I1028 18:50:21.238686   74640 start.go:293] postStartSetup for "kindnet-457876" (driver="kvm2")
	I1028 18:50:21.238695   74640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:50:21.238713   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:21.238960   74640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:50:21.238986   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:21.241131   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.241434   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.241455   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.241602   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:21.241761   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:21.241934   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:21.242086   74640 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa Username:docker}
	I1028 18:50:21.326559   74640 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:50:21.330682   74640 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:50:21.330711   74640 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:50:21.330795   74640 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:50:21.330894   74640 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:50:21.330989   74640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:50:21.340048   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:50:21.364486   74640 start.go:296] duration metric: took 125.768553ms for postStartSetup
	I1028 18:50:21.364539   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetConfigRaw
	I1028 18:50:21.365062   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetIP
	I1028 18:50:21.367522   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.367868   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.367925   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.368149   74640 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/config.json ...
	I1028 18:50:21.368362   74640 start.go:128] duration metric: took 27.891081718s to createHost
	I1028 18:50:21.368385   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:21.370478   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.370798   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.370823   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.371007   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:21.371171   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:21.371295   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:21.371405   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:21.371524   74640 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:21.371664   74640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I1028 18:50:21.371673   74640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:50:21.473229   74640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730141421.450064965
	
	I1028 18:50:21.473254   74640 fix.go:216] guest clock: 1730141421.450064965
	I1028 18:50:21.473265   74640 fix.go:229] Guest: 2024-10-28 18:50:21.450064965 +0000 UTC Remote: 2024-10-28 18:50:21.368374479 +0000 UTC m=+56.126947373 (delta=81.690486ms)
	I1028 18:50:21.473321   74640 fix.go:200] guest clock delta is within tolerance: 81.690486ms
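
The guest-clock check above compares the VM's `date +%s.%N` output against the host clock and only resyncs when the delta is too large. A small Go sketch of that comparison using the two timestamps from the log; the 2s tolerance is an assumption for the sketch, not necessarily minikube's threshold.

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance compares guest and host clocks and reports whether the
    // absolute delta is small enough to skip a resync.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1730141421, 450064965) // parsed from `date +%s.%N` on the guest
        host := time.Unix(1730141421, 368374479)  // host-side timestamp from the log
        delta, ok := withinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }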
	I1028 18:50:21.473340   74640 start.go:83] releasing machines lock for "kindnet-457876", held for 27.996244742s
	I1028 18:50:21.473375   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:21.473648   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetIP
	I1028 18:50:21.476693   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.477133   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.477162   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.477336   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:21.477810   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:21.478000   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:21.478077   74640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:50:21.478148   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:21.478190   74640 ssh_runner.go:195] Run: cat /version.json
	I1028 18:50:21.478247   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:21.480960   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.480986   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.481311   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.481343   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.481371   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:21.481384   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:21.481603   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:21.481766   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:21.481769   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:21.481945   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:21.481966   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:21.482111   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:21.482111   74640 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa Username:docker}
	I1028 18:50:21.482220   74640 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa Username:docker}
	I1028 18:50:21.579587   74640 ssh_runner.go:195] Run: systemctl --version
	I1028 18:50:21.587237   74640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:50:21.747037   74640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:50:21.755142   74640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:50:21.755246   74640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:50:21.772502   74640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
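
The find/mv step above renames any pre-existing bridge or podman CNI config with a .mk_disabled suffix so only the requested CNI (kindnet or flannel in these runs) stays active. A rough Go equivalent; disableBridgeCNIs is a made-up helper name for this sketch.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs under dir, mirroring
    // the `find ... -exec mv {} {}.mk_disabled` command in the log.
    func disableBridgeCNIs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNIs("/etc/cni/net.d")
        if err != nil {
            fmt.Println("skipping:", err) // expected when run outside the VM
            return
        }
        fmt.Println("disabled", disabled, "bridge cni config(s)")
    }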
	I1028 18:50:21.772530   74640 start.go:495] detecting cgroup driver to use...
	I1028 18:50:21.772603   74640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:50:21.787719   74640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:50:21.801179   74640 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:50:21.801248   74640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:50:21.814818   74640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:50:21.828161   74640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:50:21.950069   74640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:50:22.112692   74640 docker.go:233] disabling docker service ...
	I1028 18:50:22.112759   74640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:50:22.128299   74640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:50:22.147341   74640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:50:22.275293   74640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:50:22.400975   74640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:50:22.415789   74640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:50:22.435821   74640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:50:22.435887   74640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:22.447009   74640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:50:22.447068   74640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:22.458016   74640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:22.468665   74640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:22.479818   74640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:50:22.491096   74640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:22.501892   74640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:22.521135   74640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
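
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, and conmon cgroup. A Go sketch of the same rewrites applied to an in-memory sample of that file; the sample content is abbreviated, not the real file.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Abbreviated stand-in for /etc/crio/crio.conf.d/02-crio.conf.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // Drop any existing conmon_cgroup line, switch to cgroupfs, and re-add
        // conmon_cgroup = "pod", mirroring the sed steps in the log.
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(\s*)cgroup_manager = .*$`).
            ReplaceAllString(conf, "${1}cgroup_manager = \"cgroupfs\"\n${1}conmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }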
	I1028 18:50:22.531828   74640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:50:22.541429   74640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:50:22.541505   74640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:50:22.555282   74640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:50:22.565164   74640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:50:22.693975   74640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:50:22.806955   74640 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:50:22.807033   74640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:50:22.811841   74640 start.go:563] Will wait 60s for crictl version
	I1028 18:50:22.811911   74640 ssh_runner.go:195] Run: which crictl
	I1028 18:50:22.816194   74640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:50:22.862497   74640 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:50:22.862562   74640 ssh_runner.go:195] Run: crio --version
	I1028 18:50:22.895398   74640 ssh_runner.go:195] Run: crio --version
	I1028 18:50:22.929913   74640 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:50:19.978065   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:22.477539   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:21.476464   75305 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 18:50:21.476661   75305 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:21.476713   75305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:21.495811   75305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1028 18:50:21.496234   75305 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:21.496820   75305 main.go:141] libmachine: Using API Version  1
	I1028 18:50:21.496844   75305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:21.497236   75305 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:21.497494   75305 main.go:141] libmachine: (flannel-457876) Calling .GetMachineName
	I1028 18:50:21.497676   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:21.497828   75305 start.go:159] libmachine.API.Create for "flannel-457876" (driver="kvm2")
	I1028 18:50:21.497857   75305 client.go:168] LocalClient.Create starting
	I1028 18:50:21.497899   75305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem
	I1028 18:50:21.497940   75305 main.go:141] libmachine: Decoding PEM data...
	I1028 18:50:21.497964   75305 main.go:141] libmachine: Parsing certificate...
	I1028 18:50:21.498045   75305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem
	I1028 18:50:21.498077   75305 main.go:141] libmachine: Decoding PEM data...
	I1028 18:50:21.498099   75305 main.go:141] libmachine: Parsing certificate...
	I1028 18:50:21.498123   75305 main.go:141] libmachine: Running pre-create checks...
	I1028 18:50:21.498141   75305 main.go:141] libmachine: (flannel-457876) Calling .PreCreateCheck
	I1028 18:50:21.498476   75305 main.go:141] libmachine: (flannel-457876) Calling .GetConfigRaw
	I1028 18:50:21.498841   75305 main.go:141] libmachine: Creating machine...
	I1028 18:50:21.498856   75305 main.go:141] libmachine: (flannel-457876) Calling .Create
	I1028 18:50:21.498959   75305 main.go:141] libmachine: (flannel-457876) Creating KVM machine...
	I1028 18:50:21.500261   75305 main.go:141] libmachine: (flannel-457876) DBG | found existing default KVM network
	I1028 18:50:21.501540   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.501389   75693 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ad:d1:81} reservation:<nil>}
	I1028 18:50:21.502560   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.502480   75693 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:c8:05} reservation:<nil>}
	I1028 18:50:21.503483   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.503383   75693 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a0:5e:f6} reservation:<nil>}
	I1028 18:50:21.504727   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.504662   75693 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a1980}
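
The network.go lines above walk candidate private /24 subnets and settle on the first one not already claimed by a local interface (192.168.72.0/24 here, after skipping 192.168.39/50/61). A self-contained Go sketch of that scan; pickFreeSubnet and the candidate list are assumptions for the sketch.

    package main

    import (
        "fmt"
        "net"
    )

    // pickFreeSubnet returns the first candidate subnet that does not overlap
    // with any subnet already attached to a local interface.
    func pickFreeSubnet(candidates []string) (*net.IPNet, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return nil, err
        }
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                return nil, err
            }
            taken := false
            for _, a := range addrs {
                ipnet, ok := a.(*net.IPNet)
                if ok && (subnet.Contains(ipnet.IP) || ipnet.Contains(subnet.IP)) {
                    taken = true
                    break
                }
            }
            if !taken {
                return subnet, nil
            }
        }
        return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        subnet, err := pickFreeSubnet([]string{
            "192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24",
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", subnet)
    }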
	I1028 18:50:21.504756   75305 main.go:141] libmachine: (flannel-457876) DBG | created network xml: 
	I1028 18:50:21.504766   75305 main.go:141] libmachine: (flannel-457876) DBG | <network>
	I1028 18:50:21.504775   75305 main.go:141] libmachine: (flannel-457876) DBG |   <name>mk-flannel-457876</name>
	I1028 18:50:21.504796   75305 main.go:141] libmachine: (flannel-457876) DBG |   <dns enable='no'/>
	I1028 18:50:21.504813   75305 main.go:141] libmachine: (flannel-457876) DBG |   
	I1028 18:50:21.504851   75305 main.go:141] libmachine: (flannel-457876) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1028 18:50:21.504869   75305 main.go:141] libmachine: (flannel-457876) DBG |     <dhcp>
	I1028 18:50:21.504879   75305 main.go:141] libmachine: (flannel-457876) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1028 18:50:21.504892   75305 main.go:141] libmachine: (flannel-457876) DBG |     </dhcp>
	I1028 18:50:21.504906   75305 main.go:141] libmachine: (flannel-457876) DBG |   </ip>
	I1028 18:50:21.504912   75305 main.go:141] libmachine: (flannel-457876) DBG |   
	I1028 18:50:21.504921   75305 main.go:141] libmachine: (flannel-457876) DBG | </network>
	I1028 18:50:21.504929   75305 main.go:141] libmachine: (flannel-457876) DBG | 
	I1028 18:50:21.510009   75305 main.go:141] libmachine: (flannel-457876) DBG | trying to create private KVM network mk-flannel-457876 192.168.72.0/24...
	I1028 18:50:21.581011   75305 main.go:141] libmachine: (flannel-457876) DBG | private KVM network mk-flannel-457876 192.168.72.0/24 created
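
The network XML printed above is generated from the chosen subnet before the private KVM network is created. A minimal Go sketch that renders an equivalent document with text/template; the template is a simplified stand-in, not minikube's actual source.

    package main

    import (
        "os"
        "text/template"
    )

    // networkTmpl reproduces the shape of the libvirt network XML from the log.
    const networkTmpl = `<network>
      <name>mk-{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
        params := struct {
            Name, Gateway, ClientMin, ClientMax string
        }{"flannel-457876", "192.168.72.1", "192.168.72.2", "192.168.72.253"}

        t := template.Must(template.New("net").Parse(networkTmpl))
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }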
	I1028 18:50:21.581038   75305 main.go:141] libmachine: (flannel-457876) Setting up store path in /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876 ...
	I1028 18:50:21.581051   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.581002   75693 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:50:21.581084   75305 main.go:141] libmachine: (flannel-457876) Building disk image from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 18:50:21.581267   75305 main.go:141] libmachine: (flannel-457876) Downloading /home/jenkins/minikube-integration/19872-13443/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso...
	I1028 18:50:21.834883   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.834777   75693 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa...
	I1028 18:50:21.923050   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.922908   75693 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/flannel-457876.rawdisk...
	I1028 18:50:21.923082   75305 main.go:141] libmachine: (flannel-457876) DBG | Writing magic tar header
	I1028 18:50:21.923097   75305 main.go:141] libmachine: (flannel-457876) DBG | Writing SSH key tar header
	I1028 18:50:21.923109   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:21.923065   75693 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876 ...
	I1028 18:50:21.923257   75305 main.go:141] libmachine: (flannel-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876
	I1028 18:50:21.923295   75305 main.go:141] libmachine: (flannel-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876 (perms=drwx------)
	I1028 18:50:21.923306   75305 main.go:141] libmachine: (flannel-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube/machines
	I1028 18:50:21.923321   75305 main.go:141] libmachine: (flannel-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube/machines (perms=drwxr-xr-x)
	I1028 18:50:21.923344   75305 main.go:141] libmachine: (flannel-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:50:21.923362   75305 main.go:141] libmachine: (flannel-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19872-13443
	I1028 18:50:21.923375   75305 main.go:141] libmachine: (flannel-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443/.minikube (perms=drwxr-xr-x)
	I1028 18:50:21.923385   75305 main.go:141] libmachine: (flannel-457876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 18:50:21.923394   75305 main.go:141] libmachine: (flannel-457876) Setting executable bit set on /home/jenkins/minikube-integration/19872-13443 (perms=drwxrwxr-x)
	I1028 18:50:21.923406   75305 main.go:141] libmachine: (flannel-457876) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 18:50:21.923418   75305 main.go:141] libmachine: (flannel-457876) DBG | Checking permissions on dir: /home/jenkins
	I1028 18:50:21.923429   75305 main.go:141] libmachine: (flannel-457876) DBG | Checking permissions on dir: /home
	I1028 18:50:21.923439   75305 main.go:141] libmachine: (flannel-457876) DBG | Skipping /home - not owner
	I1028 18:50:21.923465   75305 main.go:141] libmachine: (flannel-457876) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 18:50:21.923508   75305 main.go:141] libmachine: (flannel-457876) Creating domain...
	I1028 18:50:21.924514   75305 main.go:141] libmachine: (flannel-457876) define libvirt domain using xml: 
	I1028 18:50:21.924531   75305 main.go:141] libmachine: (flannel-457876) <domain type='kvm'>
	I1028 18:50:21.924539   75305 main.go:141] libmachine: (flannel-457876)   <name>flannel-457876</name>
	I1028 18:50:21.924547   75305 main.go:141] libmachine: (flannel-457876)   <memory unit='MiB'>3072</memory>
	I1028 18:50:21.924554   75305 main.go:141] libmachine: (flannel-457876)   <vcpu>2</vcpu>
	I1028 18:50:21.924560   75305 main.go:141] libmachine: (flannel-457876)   <features>
	I1028 18:50:21.924573   75305 main.go:141] libmachine: (flannel-457876)     <acpi/>
	I1028 18:50:21.924580   75305 main.go:141] libmachine: (flannel-457876)     <apic/>
	I1028 18:50:21.924588   75305 main.go:141] libmachine: (flannel-457876)     <pae/>
	I1028 18:50:21.924612   75305 main.go:141] libmachine: (flannel-457876)     
	I1028 18:50:21.924623   75305 main.go:141] libmachine: (flannel-457876)   </features>
	I1028 18:50:21.924630   75305 main.go:141] libmachine: (flannel-457876)   <cpu mode='host-passthrough'>
	I1028 18:50:21.924640   75305 main.go:141] libmachine: (flannel-457876)   
	I1028 18:50:21.924646   75305 main.go:141] libmachine: (flannel-457876)   </cpu>
	I1028 18:50:21.924654   75305 main.go:141] libmachine: (flannel-457876)   <os>
	I1028 18:50:21.924663   75305 main.go:141] libmachine: (flannel-457876)     <type>hvm</type>
	I1028 18:50:21.924672   75305 main.go:141] libmachine: (flannel-457876)     <boot dev='cdrom'/>
	I1028 18:50:21.924686   75305 main.go:141] libmachine: (flannel-457876)     <boot dev='hd'/>
	I1028 18:50:21.924698   75305 main.go:141] libmachine: (flannel-457876)     <bootmenu enable='no'/>
	I1028 18:50:21.924708   75305 main.go:141] libmachine: (flannel-457876)   </os>
	I1028 18:50:21.924723   75305 main.go:141] libmachine: (flannel-457876)   <devices>
	I1028 18:50:21.924734   75305 main.go:141] libmachine: (flannel-457876)     <disk type='file' device='cdrom'>
	I1028 18:50:21.924747   75305 main.go:141] libmachine: (flannel-457876)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/boot2docker.iso'/>
	I1028 18:50:21.924761   75305 main.go:141] libmachine: (flannel-457876)       <target dev='hdc' bus='scsi'/>
	I1028 18:50:21.924773   75305 main.go:141] libmachine: (flannel-457876)       <readonly/>
	I1028 18:50:21.924779   75305 main.go:141] libmachine: (flannel-457876)     </disk>
	I1028 18:50:21.924790   75305 main.go:141] libmachine: (flannel-457876)     <disk type='file' device='disk'>
	I1028 18:50:21.924801   75305 main.go:141] libmachine: (flannel-457876)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 18:50:21.924815   75305 main.go:141] libmachine: (flannel-457876)       <source file='/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/flannel-457876.rawdisk'/>
	I1028 18:50:21.924832   75305 main.go:141] libmachine: (flannel-457876)       <target dev='hda' bus='virtio'/>
	I1028 18:50:21.924842   75305 main.go:141] libmachine: (flannel-457876)     </disk>
	I1028 18:50:21.924853   75305 main.go:141] libmachine: (flannel-457876)     <interface type='network'>
	I1028 18:50:21.924865   75305 main.go:141] libmachine: (flannel-457876)       <source network='mk-flannel-457876'/>
	I1028 18:50:21.924875   75305 main.go:141] libmachine: (flannel-457876)       <model type='virtio'/>
	I1028 18:50:21.924885   75305 main.go:141] libmachine: (flannel-457876)     </interface>
	I1028 18:50:21.924892   75305 main.go:141] libmachine: (flannel-457876)     <interface type='network'>
	I1028 18:50:21.924923   75305 main.go:141] libmachine: (flannel-457876)       <source network='default'/>
	I1028 18:50:21.924948   75305 main.go:141] libmachine: (flannel-457876)       <model type='virtio'/>
	I1028 18:50:21.924962   75305 main.go:141] libmachine: (flannel-457876)     </interface>
	I1028 18:50:21.924976   75305 main.go:141] libmachine: (flannel-457876)     <serial type='pty'>
	I1028 18:50:21.924987   75305 main.go:141] libmachine: (flannel-457876)       <target port='0'/>
	I1028 18:50:21.924996   75305 main.go:141] libmachine: (flannel-457876)     </serial>
	I1028 18:50:21.925004   75305 main.go:141] libmachine: (flannel-457876)     <console type='pty'>
	I1028 18:50:21.925014   75305 main.go:141] libmachine: (flannel-457876)       <target type='serial' port='0'/>
	I1028 18:50:21.925034   75305 main.go:141] libmachine: (flannel-457876)     </console>
	I1028 18:50:21.925042   75305 main.go:141] libmachine: (flannel-457876)     <rng model='virtio'>
	I1028 18:50:21.925068   75305 main.go:141] libmachine: (flannel-457876)       <backend model='random'>/dev/random</backend>
	I1028 18:50:21.925090   75305 main.go:141] libmachine: (flannel-457876)     </rng>
	I1028 18:50:21.925103   75305 main.go:141] libmachine: (flannel-457876)     
	I1028 18:50:21.925112   75305 main.go:141] libmachine: (flannel-457876)     
	I1028 18:50:21.925124   75305 main.go:141] libmachine: (flannel-457876)   </devices>
	I1028 18:50:21.925133   75305 main.go:141] libmachine: (flannel-457876) </domain>
	I1028 18:50:21.925144   75305 main.go:141] libmachine: (flannel-457876) 
	I1028 18:50:21.931796   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:77:38:d7 in network default
	I1028 18:50:21.932368   75305 main.go:141] libmachine: (flannel-457876) Ensuring networks are active...
	I1028 18:50:21.932388   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:21.933136   75305 main.go:141] libmachine: (flannel-457876) Ensuring network default is active
	I1028 18:50:21.933540   75305 main.go:141] libmachine: (flannel-457876) Ensuring network mk-flannel-457876 is active
	I1028 18:50:21.934177   75305 main.go:141] libmachine: (flannel-457876) Getting domain xml...
	I1028 18:50:21.935056   75305 main.go:141] libmachine: (flannel-457876) Creating domain...
	I1028 18:50:23.293281   75305 main.go:141] libmachine: (flannel-457876) Waiting to get IP...
	I1028 18:50:23.294327   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:23.294877   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:23.294925   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:23.294871   75693 retry.go:31] will retry after 235.240589ms: waiting for machine to come up
	I1028 18:50:23.532366   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:23.533018   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:23.533043   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:23.532985   75693 retry.go:31] will retry after 377.512988ms: waiting for machine to come up
	I1028 18:50:23.912309   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:23.912778   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:23.912801   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:23.912736   75693 retry.go:31] will retry after 323.098639ms: waiting for machine to come up
	I1028 18:50:24.237051   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:24.237605   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:24.237635   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:24.237566   75693 retry.go:31] will retry after 481.539976ms: waiting for machine to come up
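
While the new domain boots, retry.go polls the network's DHCP leases with a growing, jittered delay (235ms, 377ms, 323ms, 481ms above) until the machine reports an IP. A runnable Go sketch of that pattern; lookupIP is a placeholder and the exact backoff policy is an assumption.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("unable to find current IP address")

    // lookupIP stands in for the DHCP-lease lookup; it "succeeds" after a few
    // attempts so the sketch is runnable without libvirt.
    func lookupIP(attempt int) (string, error) {
        if attempt < 4 {
            return "", errNoLease
        }
        return "192.168.72.2", nil
    }

    func main() {
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("machine is up with IP", ip)
                return
            }
            // Jittered, attempt-scaled wait, roughly matching the delays in the log.
            wait := time.Duration(200+rand.Intn(300)) * time.Millisecond * time.Duration(attempt)
            fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, wait)
            time.Sleep(wait)
        }
    }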
	I1028 18:50:22.931226   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetIP
	I1028 18:50:22.936386   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:22.937448   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:22.937484   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:22.937752   74640 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:50:22.942147   74640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:50:22.956677   74640 kubeadm.go:883] updating cluster {Name:kindnet-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:kindnet-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:50:22.956786   74640 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:50:22.956836   74640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:50:22.994507   74640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:50:22.994579   74640 ssh_runner.go:195] Run: which lz4
	I1028 18:50:22.998755   74640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:50:23.003075   74640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:50:23.003103   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:50:24.459445   74640 crio.go:462] duration metric: took 1.460711391s to copy over tarball
	I1028 18:50:24.459530   74640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:50:26.779677   74640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.320113407s)
	I1028 18:50:26.779717   74640 crio.go:469] duration metric: took 2.320242331s to extract the tarball
	I1028 18:50:26.779731   74640 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:50:26.832100   74640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:50:26.882198   74640 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:50:26.882226   74640 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:50:26.882235   74640 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.31.2 crio true true} ...
	I1028 18:50:26.882352   74640 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-457876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kindnet-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1028 18:50:26.882440   74640 ssh_runner.go:195] Run: crio config
	I1028 18:50:26.932436   74640 cni.go:84] Creating CNI manager for "kindnet"
	I1028 18:50:26.932463   74640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:50:26.932511   74640 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-457876 NodeName:kindnet-457876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:50:26.932649   74640 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-457876"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
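	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the config minikube writes to /var/tmp/minikube/kubeadm.yaml.new and later feeds to kubeadm init (the init invocation appears further down in this log). As a rough hand-run sketch only, not part of the harness output, and assuming kubeadm v1.31.x on the PATH with the YAML above saved locally as kubeadm.yaml, the same config can be compared against kubeadm's defaults and used for a manual init:

	# Compare the generated values against kubeadm's own defaults for this release.
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration

	# Bootstrap a control plane from the same file, mirroring the kubeadm init call later in this log.
	sudo kubeadm init --config kubeadm.yaml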
	
	I1028 18:50:26.932707   74640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:50:26.944294   74640 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:50:26.944359   74640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:50:26.955365   74640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 18:50:26.973357   74640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:50:26.992534   74640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I1028 18:50:27.009408   74640 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I1028 18:50:27.013410   74640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:50:27.026549   74640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:50:27.176447   74640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:50:27.193179   74640 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876 for IP: 192.168.61.41
	I1028 18:50:27.193202   74640 certs.go:194] generating shared ca certs ...
	I1028 18:50:27.193221   74640 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:27.193414   74640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:50:27.193466   74640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:50:27.193478   74640 certs.go:256] generating profile certs ...
	I1028 18:50:27.193548   74640 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/client.key
	I1028 18:50:27.193563   74640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/client.crt with IP's: []
	I1028 18:50:27.489430   74640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/client.crt ...
	I1028 18:50:27.489462   74640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/client.crt: {Name:mk76e6edde0bf90124cf285632eb367c60e61d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:27.489659   74640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/client.key ...
	I1028 18:50:27.489675   74640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/client.key: {Name:mk45c17a3de4619284ee0795b339e6bce5ef55bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:27.489761   74640 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.key.7e638638
	I1028 18:50:27.489780   74640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.crt.7e638638 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.41]
	I1028 18:50:27.786194   74640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.crt.7e638638 ...
	I1028 18:50:27.786225   74640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.crt.7e638638: {Name:mk81804a2e111316c5781b04bbcea94997a58674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:27.786405   74640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.key.7e638638 ...
	I1028 18:50:27.786417   74640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.key.7e638638: {Name:mk5cac1a39d97c43a065865284965192aac63e57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:27.786490   74640 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.crt.7e638638 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.crt
	I1028 18:50:27.786558   74640 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.key.7e638638 -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.key
	I1028 18:50:27.786608   74640 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.key
	I1028 18:50:27.786629   74640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.crt with IP's: []
	I1028 18:50:28.008286   74640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.crt ...
	I1028 18:50:28.008316   74640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.crt: {Name:mk80ea1c8d5032ccc49644af0b9aa98b639c2b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:28.008493   74640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.key ...
	I1028 18:50:28.008508   74640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.key: {Name:mk69df4029c885d0616fecd892caba654549b64e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:28.008713   74640 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:50:28.008759   74640 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:50:28.008774   74640 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:50:28.008807   74640 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:50:28.008842   74640 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:50:28.008874   74640 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:50:28.008928   74640 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:50:28.009495   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:50:28.045208   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:50:28.072645   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:50:28.099065   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:50:28.122304   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 18:50:28.150593   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:50:28.263358   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:50:28.294144   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/kindnet-457876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:50:28.323155   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:50:28.352570   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:50:28.381682   74640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:50:28.414016   74640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:50:28.433406   74640 ssh_runner.go:195] Run: openssl version
	I1028 18:50:28.439337   74640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:50:28.450275   74640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:50:28.454957   74640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:50:28.455018   74640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:50:28.461075   74640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:50:28.472103   74640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:50:28.483671   74640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:50:28.488681   74640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:50:28.488741   74640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:50:28.496353   74640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:50:28.510537   74640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:50:28.523963   74640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:50:28.528659   74640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:50:28.528722   74640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:50:28.540092   74640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:50:28.552608   74640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:50:28.557981   74640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 18:50:28.558033   74640 kubeadm.go:392] StartCluster: {Name:kindnet-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:kindnet-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:50:28.558121   74640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:50:28.558170   74640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:50:28.601226   74640 cri.go:89] found id: ""
	I1028 18:50:28.601289   74640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:50:28.614766   74640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:50:28.627805   74640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:50:28.641832   74640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:50:28.641859   74640 kubeadm.go:157] found existing configuration files:
	
	I1028 18:50:28.641915   74640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:50:28.652180   74640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:50:28.652235   74640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:50:28.664207   74640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:50:28.676092   74640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:50:28.676153   74640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:50:28.687839   74640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:50:28.699551   74640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:50:28.699603   74640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:50:28.710680   74640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:50:28.720077   74640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:50:28.720141   74640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:50:28.729232   74640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:50:28.782880   74640 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:50:28.782962   74640 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:50:28.892890   74640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:50:28.893006   74640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:50:28.893118   74640 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:50:28.904247   74640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:50:24.478302   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:26.978094   74377 pod_ready.go:98] pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:26 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.36 HostIPs:[{IP:192.168.50.36}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-28 18:50:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-28 18:50:16 +0000 UTC,FinishedAt:2024-10-28 18:50:26 +0000 UTC,ContainerID:cri-o://bfb2b21143aebd500f77476a49715bae0fc9c52069c0e11db1e99822220d02ee,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bfb2b21143aebd500f77476a49715bae0fc9c52069c0e11db1e99822220d02ee Started:0xc00203f610 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001983db0} {Name:kube-api-access-clktt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001983dc0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1028 18:50:26.978127   74377 pod_ready.go:82] duration metric: took 11.00692634s for pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace to be "Ready" ...
	E1028 18:50:26.978145   74377 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-5272q" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:26 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-28 18:50:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.36 HostIPs:[{IP:192.168.50.36}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-28 18:50:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-28 18:50:16 +0000 UTC,FinishedAt:2024-10-28 18:50:26 +0000 UTC,ContainerID:cri-o://bfb2b21143aebd500f77476a49715bae0fc9c52069c0e11db1e99822220d02ee,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bfb2b21143aebd500f77476a49715bae0fc9c52069c0e11db1e99822220d02ee Started:0xc00203f610 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001983db0} {Name:kube-api-access-clktt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001983dc0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1028 18:50:26.978157   74377 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:24.720344   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:24.720891   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:24.720921   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:24.720836   75693 retry.go:31] will retry after 549.271966ms: waiting for machine to come up
	I1028 18:50:25.271235   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:25.271742   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:25.271788   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:25.271699   75693 retry.go:31] will retry after 622.790846ms: waiting for machine to come up
	I1028 18:50:25.896534   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:25.896977   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:25.897004   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:25.896937   75693 retry.go:31] will retry after 784.670944ms: waiting for machine to come up
	I1028 18:50:26.683173   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:26.683672   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:26.683697   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:26.683646   75693 retry.go:31] will retry after 953.363831ms: waiting for machine to come up
	I1028 18:50:27.638691   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:27.639152   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:27.639174   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:27.639097   75693 retry.go:31] will retry after 1.649344555s: waiting for machine to come up
	I1028 18:50:29.289510   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:29.289958   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:29.289986   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:29.289909   75693 retry.go:31] will retry after 1.936160554s: waiting for machine to come up
	I1028 18:50:28.987375   74640 out.go:235]   - Generating certificates and keys ...
	I1028 18:50:28.987501   74640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:50:28.987594   74640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:50:28.989181   74640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 18:50:29.203479   74640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 18:50:29.360548   74640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 18:50:29.583832   74640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 18:50:29.879401   74640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 18:50:29.879614   74640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-457876 localhost] and IPs [192.168.61.41 127.0.0.1 ::1]
	I1028 18:50:30.210945   74640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 18:50:30.211142   74640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-457876 localhost] and IPs [192.168.61.41 127.0.0.1 ::1]
	I1028 18:50:30.347976   74640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 18:50:30.695349   74640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 18:50:30.870288   74640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 18:50:30.870376   74640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:50:30.967361   74640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:50:31.036323   74640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:50:31.168970   74640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:50:31.287858   74640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:50:31.467794   74640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:50:31.468510   74640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:50:31.471429   74640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:50:29.617601   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:31.986895   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:33.989830   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:31.227940   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:31.228427   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:31.228456   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:31.228394   75693 retry.go:31] will retry after 2.202139335s: waiting for machine to come up
	I1028 18:50:33.433132   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:33.433929   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:33.433957   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:33.433890   75693 retry.go:31] will retry after 3.469180606s: waiting for machine to come up
	I1028 18:50:31.473300   74640 out.go:235]   - Booting up control plane ...
	I1028 18:50:31.473418   74640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:50:31.473536   74640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:50:31.473659   74640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:50:31.494411   74640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:50:31.506482   74640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:50:31.506549   74640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:50:31.673033   74640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:50:31.673179   74640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:50:32.170990   74640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.876433ms
	I1028 18:50:32.171133   74640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:50:37.672146   74640 kubeadm.go:310] [api-check] The API server is healthy after 5.504184107s
	I1028 18:50:37.693740   74640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:50:37.719221   74640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:50:37.757706   74640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:50:37.757960   74640 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-457876 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:50:37.776780   74640 kubeadm.go:310] [bootstrap-token] Using token: 9ew43v.r0h8e2x276cse2yi
	I1028 18:50:37.778186   74640 out.go:235]   - Configuring RBAC rules ...
	I1028 18:50:37.778326   74640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:50:37.788182   74640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:50:37.799080   74640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:50:37.803123   74640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:50:37.812271   74640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:50:37.820780   74640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:50:38.078472   74640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:50:38.501314   74640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:50:39.077955   74640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:50:39.078868   74640 kubeadm.go:310] 
	I1028 18:50:39.078935   74640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:50:39.078950   74640 kubeadm.go:310] 
	I1028 18:50:39.079054   74640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:50:39.079065   74640 kubeadm.go:310] 
	I1028 18:50:39.079118   74640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:50:39.079193   74640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:50:39.079288   74640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:50:39.079303   74640 kubeadm.go:310] 
	I1028 18:50:39.079354   74640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:50:39.079360   74640 kubeadm.go:310] 
	I1028 18:50:39.079414   74640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:50:39.079424   74640 kubeadm.go:310] 
	I1028 18:50:39.079499   74640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:50:39.079617   74640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:50:39.079731   74640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:50:39.079747   74640 kubeadm.go:310] 
	I1028 18:50:39.079878   74640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:50:39.079998   74640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:50:39.080012   74640 kubeadm.go:310] 
	I1028 18:50:39.080129   74640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9ew43v.r0h8e2x276cse2yi \
	I1028 18:50:39.080289   74640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:50:39.080325   74640 kubeadm.go:310] 	--control-plane 
	I1028 18:50:39.080335   74640 kubeadm.go:310] 
	I1028 18:50:39.080462   74640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:50:39.080497   74640 kubeadm.go:310] 
	I1028 18:50:39.080624   74640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9ew43v.r0h8e2x276cse2yi \
	I1028 18:50:39.080765   74640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:50:39.081382   74640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:50:39.081465   74640 cni.go:84] Creating CNI manager for "kindnet"
	I1028 18:50:39.083303   74640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 18:50:36.484574   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:38.485416   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:36.905245   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:36.905791   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:36.905832   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:36.905714   75693 retry.go:31] will retry after 4.123811398s: waiting for machine to come up
	I1028 18:50:39.084627   74640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 18:50:39.091336   74640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 18:50:39.091355   74640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 18:50:39.113649   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
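	The kubectl apply above installs the kindnet CNI manifest rendered to /var/tmp/minikube/cni.yaml. As a hedged sketch (not part of the test output), assuming kubectl access to the resulting cluster and the stock kindnet manifest, the rollout can be checked with:

	# kindnet runs as DaemonSet pods in kube-system; the grep avoids assuming an exact pod name.
	kubectl -n kube-system get pods -o wide | grep -i kindnet

	# The node should flip to Ready once the CNI is up (the harness waits for this further below).
	kubectl get nodes -o wide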
	I1028 18:50:39.365809   74640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:50:39.365900   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:39.365931   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-457876 minikube.k8s.io/updated_at=2024_10_28T18_50_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=kindnet-457876 minikube.k8s.io/primary=true
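	The two commands above grant the kube-system default ServiceAccount cluster-admin via the minikube-rbac ClusterRoleBinding and stamp the node with minikube's bookkeeping labels. A minimal verification sketch, assuming kubectl access to the cluster (not something the harness itself runs):

	# The node should carry the minikube.k8s.io/* labels set above.
	kubectl get node kindnet-457876 --show-labels

	# The RBAC binding created above should exist at cluster scope.
	kubectl get clusterrolebinding minikube-rbac -o wide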
	I1028 18:50:39.528757   74640 ops.go:34] apiserver oom_adj: -16
	I1028 18:50:39.528893   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:40.029683   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:40.528977   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:41.029672   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:41.529261   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:42.029640   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:42.529679   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:43.029343   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:43.529867   74640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:50:43.613875   74640 kubeadm.go:1113] duration metric: took 4.248039091s to wait for elevateKubeSystemPrivileges
	I1028 18:50:43.613911   74640 kubeadm.go:394] duration metric: took 15.055882024s to StartCluster
	I1028 18:50:43.613932   74640 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:43.614020   74640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:50:43.615111   74640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:43.615322   74640 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:50:43.615329   74640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 18:50:43.615418   74640 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:50:43.615527   74640 config.go:182] Loaded profile config "kindnet-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:50:43.615537   74640 addons.go:69] Setting storage-provisioner=true in profile "kindnet-457876"
	I1028 18:50:43.615559   74640 addons.go:234] Setting addon storage-provisioner=true in "kindnet-457876"
	I1028 18:50:43.615575   74640 addons.go:69] Setting default-storageclass=true in profile "kindnet-457876"
	I1028 18:50:43.615594   74640 host.go:66] Checking if "kindnet-457876" exists ...
	I1028 18:50:43.615600   74640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-457876"
	I1028 18:50:43.616051   74640 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:43.616087   74640 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:43.616102   74640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:43.616124   74640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:43.616996   74640 out.go:177] * Verifying Kubernetes components...
	I1028 18:50:43.618654   74640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:50:43.632551   74640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36497
	I1028 18:50:43.632553   74640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
	I1028 18:50:43.633027   74640 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:43.633029   74640 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:43.633513   74640 main.go:141] libmachine: Using API Version  1
	I1028 18:50:43.633532   74640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:43.633656   74640 main.go:141] libmachine: Using API Version  1
	I1028 18:50:43.633680   74640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:43.633874   74640 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:43.633995   74640 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:43.634122   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetState
	I1028 18:50:43.634613   74640 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:43.634650   74640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:43.637531   74640 addons.go:234] Setting addon default-storageclass=true in "kindnet-457876"
	I1028 18:50:43.637570   74640 host.go:66] Checking if "kindnet-457876" exists ...
	I1028 18:50:43.637857   74640 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:43.637883   74640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:43.650545   74640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I1028 18:50:43.650954   74640 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:43.651538   74640 main.go:141] libmachine: Using API Version  1
	I1028 18:50:43.651562   74640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:43.651898   74640 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:43.652206   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetState
	I1028 18:50:43.653261   74640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I1028 18:50:43.653652   74640 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:43.654150   74640 main.go:141] libmachine: Using API Version  1
	I1028 18:50:43.654175   74640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:43.654196   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:43.654492   74640 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:43.655108   74640 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:50:43.655139   74640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:50:43.656658   74640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:50:40.984118   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:42.985149   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:41.033602   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:41.034082   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find current IP address of domain flannel-457876 in network mk-flannel-457876
	I1028 18:50:41.034106   75305 main.go:141] libmachine: (flannel-457876) DBG | I1028 18:50:41.034002   75693 retry.go:31] will retry after 5.591254778s: waiting for machine to come up
	I1028 18:50:43.658054   74640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:50:43.658076   74640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:50:43.658096   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:43.660979   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:43.661517   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:43.661545   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:43.661713   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:43.661894   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:43.662059   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:43.662298   74640 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa Username:docker}
	I1028 18:50:43.669672   74640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35205
	I1028 18:50:43.670144   74640 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:50:43.670620   74640 main.go:141] libmachine: Using API Version  1
	I1028 18:50:43.670638   74640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:50:43.670954   74640 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:50:43.671181   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetState
	I1028 18:50:43.672962   74640 main.go:141] libmachine: (kindnet-457876) Calling .DriverName
	I1028 18:50:43.673343   74640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:50:43.673356   74640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:50:43.673368   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHHostname
	I1028 18:50:43.675945   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:43.676217   74640 main.go:141] libmachine: (kindnet-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:e5:15", ip: ""} in network mk-kindnet-457876: {Iface:virbr3 ExpiryTime:2024-10-28 19:50:09 +0000 UTC Type:0 Mac:52:54:00:cd:e5:15 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:kindnet-457876 Clientid:01:52:54:00:cd:e5:15}
	I1028 18:50:43.676362   74640 main.go:141] libmachine: (kindnet-457876) DBG | domain kindnet-457876 has defined IP address 192.168.61.41 and MAC address 52:54:00:cd:e5:15 in network mk-kindnet-457876
	I1028 18:50:43.676389   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHPort
	I1028 18:50:43.676599   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHKeyPath
	I1028 18:50:43.676710   74640 main.go:141] libmachine: (kindnet-457876) Calling .GetSSHUsername
	I1028 18:50:43.676829   74640 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/kindnet-457876/id_rsa Username:docker}
	I1028 18:50:43.831456   74640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:50:43.831815   74640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
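	The bash pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host-side gateway (192.168.61.1 here) ahead of the forward . /etc/resolv.conf line and adds the log plugin before errors; the "host record injected" entry a few lines below confirms it took effect. A hedged sketch, assuming kubectl access to the finished cluster, of checking the injected record:

	# Show the rewritten Corefile; expect a hosts { 192.168.61.1 host.minikube.internal ... } block.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

	# From any pod in the cluster the name should then resolve to 192.168.61.1, e.g.:
	# nslookup host.minikube.internal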
	I1028 18:50:43.862438   74640 node_ready.go:35] waiting up to 15m0s for node "kindnet-457876" to be "Ready" ...
	I1028 18:50:44.001044   74640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:50:44.003478   74640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:50:44.321189   74640 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1028 18:50:44.321248   74640 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:44.321268   74640 main.go:141] libmachine: (kindnet-457876) Calling .Close
	I1028 18:50:44.321575   74640 main.go:141] libmachine: (kindnet-457876) DBG | Closing plugin on server side
	I1028 18:50:44.321608   74640 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:44.321621   74640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:44.321630   74640 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:44.321643   74640 main.go:141] libmachine: (kindnet-457876) Calling .Close
	I1028 18:50:44.321873   74640 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:44.321887   74640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:44.321889   74640 main.go:141] libmachine: (kindnet-457876) DBG | Closing plugin on server side
	I1028 18:50:44.336584   74640 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:44.336602   74640 main.go:141] libmachine: (kindnet-457876) Calling .Close
	I1028 18:50:44.336872   74640 main.go:141] libmachine: (kindnet-457876) DBG | Closing plugin on server side
	I1028 18:50:44.336904   74640 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:44.336924   74640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:44.773767   74640 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:44.773790   74640 main.go:141] libmachine: (kindnet-457876) Calling .Close
	I1028 18:50:44.774108   74640 main.go:141] libmachine: (kindnet-457876) DBG | Closing plugin on server side
	I1028 18:50:44.774179   74640 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:44.774193   74640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:44.774206   74640 main.go:141] libmachine: Making call to close driver server
	I1028 18:50:44.774220   74640 main.go:141] libmachine: (kindnet-457876) Calling .Close
	I1028 18:50:44.774483   74640 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:50:44.774509   74640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:50:44.774511   74640 main.go:141] libmachine: (kindnet-457876) DBG | Closing plugin on server side
	I1028 18:50:44.775949   74640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 18:50:44.777034   74640 addons.go:510] duration metric: took 1.161618358s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 18:50:44.825957   74640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-457876" context rescaled to 1 replicas
	I1028 18:50:44.985192   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:46.985751   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:46.627367   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.627825   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has current primary IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.627842   75305 main.go:141] libmachine: (flannel-457876) Found IP for machine: 192.168.72.134
	I1028 18:50:46.627850   75305 main.go:141] libmachine: (flannel-457876) Reserving static IP address...
	I1028 18:50:46.628199   75305 main.go:141] libmachine: (flannel-457876) DBG | unable to find host DHCP lease matching {name: "flannel-457876", mac: "52:54:00:57:59:d4", ip: "192.168.72.134"} in network mk-flannel-457876
	I1028 18:50:46.702230   75305 main.go:141] libmachine: (flannel-457876) DBG | Getting to WaitForSSH function...
	I1028 18:50:46.702256   75305 main.go:141] libmachine: (flannel-457876) Reserved static IP address: 192.168.72.134
	I1028 18:50:46.702271   75305 main.go:141] libmachine: (flannel-457876) Waiting for SSH to be available...
	I1028 18:50:46.704854   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.705265   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:46.705293   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.705478   75305 main.go:141] libmachine: (flannel-457876) DBG | Using SSH client type: external
	I1028 18:50:46.705512   75305 main.go:141] libmachine: (flannel-457876) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa (-rw-------)
	I1028 18:50:46.705539   75305 main.go:141] libmachine: (flannel-457876) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:50:46.705565   75305 main.go:141] libmachine: (flannel-457876) DBG | About to run SSH command:
	I1028 18:50:46.705596   75305 main.go:141] libmachine: (flannel-457876) DBG | exit 0
	I1028 18:50:46.836522   75305 main.go:141] libmachine: (flannel-457876) DBG | SSH cmd err, output: <nil>: 
	I1028 18:50:46.836795   75305 main.go:141] libmachine: (flannel-457876) KVM machine creation complete!
	I1028 18:50:46.837104   75305 main.go:141] libmachine: (flannel-457876) Calling .GetConfigRaw
	I1028 18:50:46.837608   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:46.837809   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:46.837961   75305 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 18:50:46.837986   75305 main.go:141] libmachine: (flannel-457876) Calling .GetState
	I1028 18:50:46.839290   75305 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 18:50:46.839306   75305 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 18:50:46.839314   75305 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 18:50:46.839322   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:46.841696   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.842051   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:46.842078   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.842274   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:46.842449   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:46.842604   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:46.842780   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:46.842966   75305 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:46.843144   75305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1028 18:50:46.843155   75305 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 18:50:46.955608   75305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:50:46.955632   75305 main.go:141] libmachine: Detecting the provisioner...
	I1028 18:50:46.955642   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:46.958471   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.958800   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:46.958831   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:46.959018   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:46.959208   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:46.959390   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:46.959529   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:46.959701   75305 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:46.959906   75305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1028 18:50:46.959920   75305 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 18:50:47.073195   75305 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 18:50:47.073269   75305 main.go:141] libmachine: found compatible host: buildroot
	I1028 18:50:47.073283   75305 main.go:141] libmachine: Provisioning with buildroot...
	I1028 18:50:47.073292   75305 main.go:141] libmachine: (flannel-457876) Calling .GetMachineName
	I1028 18:50:47.073530   75305 buildroot.go:166] provisioning hostname "flannel-457876"
	I1028 18:50:47.073556   75305 main.go:141] libmachine: (flannel-457876) Calling .GetMachineName
	I1028 18:50:47.073739   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:47.076110   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.076394   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:47.076422   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.076528   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:47.076703   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:47.076847   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:47.077003   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:47.077235   75305 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:47.077428   75305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1028 18:50:47.077441   75305 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-457876 && echo "flannel-457876" | sudo tee /etc/hostname
	I1028 18:50:47.206630   75305 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-457876
	
	I1028 18:50:47.206667   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:47.209538   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.209954   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:47.209981   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.210134   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:47.210311   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:47.210465   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:47.210627   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:47.210817   75305 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:47.211019   75305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1028 18:50:47.211049   75305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-457876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-457876/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-457876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:50:47.332995   75305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:50:47.333052   75305 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:50:47.333108   75305 buildroot.go:174] setting up certificates
	I1028 18:50:47.333117   75305 provision.go:84] configureAuth start
	I1028 18:50:47.333129   75305 main.go:141] libmachine: (flannel-457876) Calling .GetMachineName
	I1028 18:50:47.333431   75305 main.go:141] libmachine: (flannel-457876) Calling .GetIP
	I1028 18:50:47.336223   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.336660   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:47.336686   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.336897   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:47.339072   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.339476   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:47.339513   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.339671   75305 provision.go:143] copyHostCerts
	I1028 18:50:47.339729   75305 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:50:47.339742   75305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:50:47.339796   75305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:50:47.339878   75305 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:50:47.339887   75305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:50:47.339906   75305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:50:47.339955   75305 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:50:47.339980   75305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:50:47.340013   75305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:50:47.340097   75305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.flannel-457876 san=[127.0.0.1 192.168.72.134 flannel-457876 localhost minikube]
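
The provision step above creates the guest's TLS server certificate, signed by the shared minikube CA and covering the SANs shown in the log (127.0.0.1, 192.168.72.134, flannel-457876, localhost, minikube). minikube does this in Go; a rough openssl equivalent, useful only for inspection and using placeholder file names rather than minikube's real paths, would be:

    # Hypothetical stand-in for provision.go's server-cert generation; file names are illustrative only.
    openssl req -new -key server-key.pem \
      -subj "/O=jenkins.flannel-457876/CN=flannel-457876" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.134,DNS:flannel-457876,DNS:localhost,DNS:minikube") \
      -out server.pem
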
	I1028 18:50:47.679353   75305 provision.go:177] copyRemoteCerts
	I1028 18:50:47.679412   75305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:50:47.679435   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:47.682056   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.682423   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:47.682449   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.682625   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:47.682827   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:47.682992   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:47.683140   75305 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa Username:docker}
	I1028 18:50:47.770678   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:50:47.794273   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1028 18:50:47.818853   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:50:47.842659   75305 provision.go:87] duration metric: took 509.527858ms to configureAuth
	I1028 18:50:47.842688   75305 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:50:47.842858   75305 config.go:182] Loaded profile config "flannel-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:50:47.842926   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:47.845762   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.846145   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:47.846175   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:47.846328   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:47.846524   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:47.846739   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:47.846858   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:47.847022   75305 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:47.847249   75305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1028 18:50:47.847266   75305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:50:48.087939   75305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
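
The command above writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts CRI-O so the --insecure-registry setting for the service CIDR takes effect. Assuming the buildroot ISO wires that file into crio.service as an EnvironmentFile (the unit itself is not shown in this log), the result can be checked on the guest with:

    # Verification sketch; relies on the assumption that crio.service sources the sysconfig file.
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environmentfile
    systemctl show crio -p ExecStart
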
	
	I1028 18:50:48.087962   75305 main.go:141] libmachine: Checking connection to Docker...
	I1028 18:50:48.087971   75305 main.go:141] libmachine: (flannel-457876) Calling .GetURL
	I1028 18:50:48.089268   75305 main.go:141] libmachine: (flannel-457876) DBG | Using libvirt version 6000000
	I1028 18:50:48.091751   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.092068   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:48.092099   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.092290   75305 main.go:141] libmachine: Docker is up and running!
	I1028 18:50:48.092308   75305 main.go:141] libmachine: Reticulating splines...
	I1028 18:50:48.092316   75305 client.go:171] duration metric: took 26.594448167s to LocalClient.Create
	I1028 18:50:48.092343   75305 start.go:167] duration metric: took 26.594516596s to libmachine.API.Create "flannel-457876"
	I1028 18:50:48.092354   75305 start.go:293] postStartSetup for "flannel-457876" (driver="kvm2")
	I1028 18:50:48.092367   75305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:50:48.092389   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:48.092652   75305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:50:48.092681   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:48.095309   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.095716   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:48.095742   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.095914   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:48.096083   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:48.096241   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:48.096440   75305 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa Username:docker}
	I1028 18:50:48.183010   75305 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:50:48.187189   75305 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:50:48.187209   75305 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:50:48.187268   75305 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:50:48.187374   75305 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:50:48.187493   75305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:50:48.197052   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:50:48.219946   75305 start.go:296] duration metric: took 127.578512ms for postStartSetup
	I1028 18:50:48.220007   75305 main.go:141] libmachine: (flannel-457876) Calling .GetConfigRaw
	I1028 18:50:48.220587   75305 main.go:141] libmachine: (flannel-457876) Calling .GetIP
	I1028 18:50:48.222903   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.223216   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:48.223273   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.223441   75305 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/config.json ...
	I1028 18:50:48.223601   75305 start.go:128] duration metric: took 26.749976422s to createHost
	I1028 18:50:48.223621   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:48.226044   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.226372   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:48.226404   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.226538   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:48.226716   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:48.226863   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:48.227004   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:48.227126   75305 main.go:141] libmachine: Using SSH client type: native
	I1028 18:50:48.227301   75305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1028 18:50:48.227313   75305 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:50:48.345383   75305 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730141448.301190429
	
	I1028 18:50:48.345407   75305 fix.go:216] guest clock: 1730141448.301190429
	I1028 18:50:48.345417   75305 fix.go:229] Guest: 2024-10-28 18:50:48.301190429 +0000 UTC Remote: 2024-10-28 18:50:48.223611542 +0000 UTC m=+58.853482726 (delta=77.578887ms)
	I1028 18:50:48.345458   75305 fix.go:200] guest clock delta is within tolerance: 77.578887ms
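
fix.go reads the guest's wall clock over SSH (the date +%s.%N command above) and accepts the ~78 ms difference from the host as within tolerance. A minimal manual version of the same comparison, using the SSH key path from this log:

    # Rough guest-clock check; bc handles the fractional subtraction.
    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa
    guest=$(ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.72.134 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest clock delta: $(echo "$host - $guest" | bc)s"
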
	I1028 18:50:48.345466   75305 start.go:83] releasing machines lock for "flannel-457876", held for 26.871999716s
	I1028 18:50:48.345492   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:48.345764   75305 main.go:141] libmachine: (flannel-457876) Calling .GetIP
	I1028 18:50:48.348367   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.348742   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:48.348765   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.348939   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:48.349424   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:48.349599   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:50:48.349682   75305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:50:48.349728   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:48.349788   75305 ssh_runner.go:195] Run: cat /version.json
	I1028 18:50:48.349816   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:50:48.352145   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.352313   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.352539   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:48.352564   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.352723   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:48.352721   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:48.352754   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:48.352894   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:50:48.352916   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:48.353046   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:50:48.353157   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:48.353233   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:50:48.353286   75305 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa Username:docker}
	I1028 18:50:48.353341   75305 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa Username:docker}
	I1028 18:50:48.433061   75305 ssh_runner.go:195] Run: systemctl --version
	I1028 18:50:48.459722   75305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:50:48.624109   75305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:50:48.629875   75305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:50:48.629930   75305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:50:48.647333   75305 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:50:48.647353   75305 start.go:495] detecting cgroup driver to use...
	I1028 18:50:48.647405   75305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:50:48.667540   75305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:50:48.682553   75305 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:50:48.682602   75305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:50:48.695705   75305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:50:48.709182   75305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:50:48.820652   75305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:50:48.982543   75305 docker.go:233] disabling docker service ...
	I1028 18:50:48.982613   75305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:50:48.997426   75305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:50:49.010965   75305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:50:49.151034   75305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:50:49.272299   75305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:50:49.286130   75305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:50:49.304292   75305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:50:49.304368   75305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:49.314499   75305 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:50:49.314563   75305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:49.324591   75305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:49.334310   75305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:49.344269   75305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:50:49.354169   75305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:49.363994   75305 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:50:49.381565   75305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
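
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and allow unprivileged low ports via default_sysctls. The log never dumps the file, but based on those edits the drop-in should end up containing roughly:

    # Expected values after the edits above (file contents are not shown in the log, so treat as an approximation).
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
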
	I1028 18:50:49.391125   75305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:50:49.399950   75305 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:50:49.400003   75305 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:50:49.412151   75305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
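
The failed sysctl read above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is why the log notes the failure "might be okay" and then loads the module and enables IPv4 forwarding. The equivalent preparation on a generic Linux guest (the final bridge-nf write is the usual Kubernetes prerequisite and is not shown explicitly in this log):

    # Bridge/netfilter prerequisites for kube-proxy and CNI; the last line is an assumption, not in the log.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
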
	I1028 18:50:49.421253   75305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:50:49.538929   75305 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:50:49.647312   75305 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:50:49.647397   75305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:50:49.652886   75305 start.go:563] Will wait 60s for crictl version
	I1028 18:50:49.652938   75305 ssh_runner.go:195] Run: which crictl
	I1028 18:50:49.656591   75305 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:50:49.692869   75305 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
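
These crictl calls work without an explicit --runtime-endpoint because of the /etc/crictl.yaml written at 18:50:49.286 above, which points crictl at CRI-O's socket:

    # The file written earlier, as crictl reads it on the guest.
    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock
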
	I1028 18:50:49.692955   75305 ssh_runner.go:195] Run: crio --version
	I1028 18:50:49.721826   75305 ssh_runner.go:195] Run: crio --version
	I1028 18:50:49.752087   75305 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:50:45.866395   74640 node_ready.go:53] node "kindnet-457876" has status "Ready":"False"
	I1028 18:50:47.866457   74640 node_ready.go:53] node "kindnet-457876" has status "Ready":"False"
	I1028 18:50:49.484785   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:51.989317   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:49.753335   75305 main.go:141] libmachine: (flannel-457876) Calling .GetIP
	I1028 18:50:49.755784   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:49.756057   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:50:49.756087   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:50:49.756271   75305 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 18:50:49.760200   75305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:50:49.773355   75305 kubeadm.go:883] updating cluster {Name:flannel-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:50:49.773449   75305 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:50:49.773489   75305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:50:49.805084   75305 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:50:49.805144   75305 ssh_runner.go:195] Run: which lz4
	I1028 18:50:49.809200   75305 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:50:49.813412   75305 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:50:49.813443   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:50:51.201080   75305 crio.go:462] duration metric: took 1.391899945s to copy over tarball
	I1028 18:50:51.201174   75305 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:50:53.450165   75305 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.248964451s)
	I1028 18:50:53.450191   75305 crio.go:469] duration metric: took 2.249082216s to extract the tarball
	I1028 18:50:53.450198   75305 ssh_runner.go:146] rm: /preloaded.tar.lz4
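
Since no preloaded images were found on the fresh guest, minikube copies the ~392 MB preload tarball over SSH and unpacks it into /var with extended attributes preserved (security.capability is needed for file capabilities on some binaries). A manual equivalent of that restore, staged through /tmp because a plain scp cannot write to / directly (minikube's own copy path does not have that limitation):

    # Manual preload restore sketch; requires lz4 on the guest, which the minikube ISO provides.
    KEY=/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa
    TARBALL=/home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
    scp -i "$KEY" "$TARBALL" docker@192.168.72.134:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.72.134 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'
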
	I1028 18:50:53.488422   75305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:50:53.533520   75305 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:50:53.533547   75305 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:50:53.533556   75305 kubeadm.go:934] updating node { 192.168.72.134 8443 v1.31.2 crio true true} ...
	I1028 18:50:53.533649   75305 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-457876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:flannel-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
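
The [Unit]/[Service]/[Install] fragment above resets ExecStart (the empty ExecStart= line) and re-declares it with the node-specific flags; a few lines below it is written to the guest as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes). To confirm the override on the guest:

    # systemd merges kubelet.service.d/*.conf over the base unit, so the drop-in's ExecStart wins.
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
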
	I1028 18:50:53.533727   75305 ssh_runner.go:195] Run: crio config
	I1028 18:50:53.581788   75305 cni.go:84] Creating CNI manager for "flannel"
	I1028 18:50:53.581812   75305 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:50:53.581831   75305 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-457876 NodeName:flannel-457876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:50:53.581962   75305 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-457876"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.134"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
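
The generated kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file; a few lines below it is copied to the guest as /var/tmp/minikube/kubeadm.yaml.new. As a sanity check outside of what the test itself does, a config like this can be exercised without touching node state via kubeadm's dry-run mode:

    # Dry-run sketch using the binary and config path from this log; makes no changes on the guest.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
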
	
	I1028 18:50:53.582031   75305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:50:53.592517   75305 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:50:53.592578   75305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:50:53.602190   75305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1028 18:50:53.620464   75305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:50:53.638054   75305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1028 18:50:53.656250   75305 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1028 18:50:53.660325   75305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:50:53.672761   75305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:50:53.797383   75305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:50:53.813535   75305 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876 for IP: 192.168.72.134
	I1028 18:50:53.813561   75305 certs.go:194] generating shared ca certs ...
	I1028 18:50:53.813581   75305 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:53.813765   75305 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:50:53.813821   75305 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:50:53.813834   75305 certs.go:256] generating profile certs ...
	I1028 18:50:53.813898   75305 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/client.key
	I1028 18:50:53.813916   75305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/client.crt with IP's: []
	I1028 18:50:53.949037   75305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/client.crt ...
	I1028 18:50:53.949069   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/client.crt: {Name:mk4e009b43b54ad432c8cee1027f3699fa28b273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:53.949271   75305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/client.key ...
	I1028 18:50:53.949311   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/client.key: {Name:mk84997eb1239bf8c64edde9cf74edfa299c9b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:53.949412   75305 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.key.b71b9dec
	I1028 18:50:53.949427   75305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.crt.b71b9dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.134]
	I1028 18:50:54.029083   75305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.crt.b71b9dec ...
	I1028 18:50:54.029115   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.crt.b71b9dec: {Name:mk0de051464e014546858b3f1ab738af6633b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:54.029276   75305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.key.b71b9dec ...
	I1028 18:50:54.029288   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.key.b71b9dec: {Name:mk9c10c4b81bcddc7ca5be8e28985e3d88178f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:54.029362   75305 certs.go:381] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.crt.b71b9dec -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.crt
	I1028 18:50:54.029453   75305 certs.go:385] copying /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.key.b71b9dec -> /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.key
	I1028 18:50:54.029508   75305 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.key
	I1028 18:50:54.029522   75305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.crt with IP's: []
	I1028 18:50:54.126537   75305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.crt ...
	I1028 18:50:54.126563   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.crt: {Name:mkcfd9dfa560de0b48df218cde615dc3af630e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:54.126720   75305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.key ...
	I1028 18:50:54.126733   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.key: {Name:mk45c7b1c45b0d3bb849813c7944a4275d3101f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:50:54.126890   75305 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:50:54.126924   75305 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:50:54.126934   75305 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:50:54.126993   75305 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:50:54.127022   75305 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:50:54.127044   75305 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:50:54.127080   75305 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:50:54.127597   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:50:54.154032   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:50:54.178200   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:50:54.201787   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:50:54.225465   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 18:50:54.248067   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:50:54.273081   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:50:54.297496   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/flannel-457876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:50:54.320882   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:50:54.344950   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:50:54.369595   75305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:50:54.393085   75305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:50:54.411091   75305 ssh_runner.go:195] Run: openssl version
	I1028 18:50:54.417003   75305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:50:54.427479   75305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:50:54.431947   75305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:50:54.432007   75305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:50:54.437657   75305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:50:54.448231   75305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:50:54.458853   75305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:50:54.463504   75305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:50:54.463555   75305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:50:54.469238   75305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:50:54.481075   75305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:50:54.494400   75305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:50:54.504814   75305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:50:54.504863   75305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:50:54.521567   75305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
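The `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's hashed-directory convention: each trusted certificate in /etc/ssl/certs gets a symlink named `<subject-hash>.0` (for example b5213941.0 for minikubeCA.pem) so TLS clients can find the CA by subject hash. A minimal Go sketch of the same two steps, assuming `openssl` is on PATH; the paths in main are placeholders for illustration only:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the "openssl x509 -hash -noout" + "ln -fs" steps in the
// log: compute the certificate's subject hash and (re)create <hash>.0 in certsDir.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -f; ignore "does not exist" errors
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths for illustration only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```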
	I1028 18:50:54.535572   75305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:50:54.540817   75305 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 18:50:54.540875   75305 kubeadm.go:392] StartCluster: {Name:flannel-457876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:flannel-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:50:54.540978   75305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:50:54.541027   75305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:50:54.580439   75305 cri.go:89] found id: ""
	I1028 18:50:54.580524   75305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:50:54.592048   75305 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:50:54.601781   75305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:50:54.611113   75305 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:50:54.611139   75305 kubeadm.go:157] found existing configuration files:
	
	I1028 18:50:54.611185   75305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:50:54.620229   75305 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:50:54.620278   75305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:50:54.629516   75305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:50:54.638286   75305 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:50:54.638340   75305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:50:54.647444   75305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:50:54.656225   75305 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:50:54.656278   75305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:50:54.665444   75305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:50:54.674245   75305 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:50:54.674298   75305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:50:54.683250   75305 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:50:54.738538   75305 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:50:54.738687   75305 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:50:54.843815   75305 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:50:54.843971   75305 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:50:54.844123   75305 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:50:54.855743   75305 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:50:50.366081   74640 node_ready.go:53] node "kindnet-457876" has status "Ready":"False"
	I1028 18:50:52.366973   74640 node_ready.go:53] node "kindnet-457876" has status "Ready":"False"
	I1028 18:50:54.867690   74640 node_ready.go:53] node "kindnet-457876" has status "Ready":"False"
	I1028 18:50:54.485062   74377 pod_ready.go:103] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:55.836821   74377 pod_ready.go:93] pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:55.836852   74377 pod_ready.go:82] duration metric: took 28.85868047s for pod "coredns-7c65d6cfc9-sl46v" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:55.836868   74377 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.632645   74377 pod_ready.go:93] pod "etcd-auto-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:56.632666   74377 pod_ready.go:82] duration metric: took 795.79178ms for pod "etcd-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.632677   74377 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.660141   74377 pod_ready.go:93] pod "kube-apiserver-auto-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:56.660169   74377 pod_ready.go:82] duration metric: took 27.484487ms for pod "kube-apiserver-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.660183   74377 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.669368   74377 pod_ready.go:93] pod "kube-controller-manager-auto-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:56.669394   74377 pod_ready.go:82] duration metric: took 9.202459ms for pod "kube-controller-manager-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.669407   74377 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-hgjjx" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.675746   74377 pod_ready.go:93] pod "kube-proxy-hgjjx" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:56.675769   74377 pod_ready.go:82] duration metric: took 6.353986ms for pod "kube-proxy-hgjjx" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.675782   74377 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.682433   74377 pod_ready.go:93] pod "kube-scheduler-auto-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:56.682455   74377 pod_ready.go:82] duration metric: took 6.665857ms for pod "kube-scheduler-auto-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:56.682463   74377 pod_ready.go:39] duration metric: took 40.726467238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:50:56.682481   74377 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:50:56.682538   74377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:50:56.701661   74377 api_server.go:72] duration metric: took 41.751742431s to wait for apiserver process to appear ...
	I1028 18:50:56.701689   74377 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:50:56.701713   74377 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I1028 18:50:56.707714   74377 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I1028 18:50:56.708831   74377 api_server.go:141] control plane version: v1.31.2
	I1028 18:50:56.708854   74377 api_server.go:131] duration metric: took 7.157498ms to wait for apiserver health ...
	I1028 18:50:56.708863   74377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:50:56.715500   74377 system_pods.go:59] 7 kube-system pods found
	I1028 18:50:56.715529   74377 system_pods.go:61] "coredns-7c65d6cfc9-sl46v" [024273c0-3854-4706-8ba9-02c4be99417e] Running
	I1028 18:50:56.715537   74377 system_pods.go:61] "etcd-auto-457876" [2bd5cd58-b8fa-4992-8dd7-a4865e737ac7] Running
	I1028 18:50:56.715543   74377 system_pods.go:61] "kube-apiserver-auto-457876" [1841467c-b594-45f1-a131-9e525a9db55a] Running
	I1028 18:50:56.715548   74377 system_pods.go:61] "kube-controller-manager-auto-457876" [5e9abff3-ec3e-47d9-a072-ab6f3f7f04d2] Running
	I1028 18:50:56.715555   74377 system_pods.go:61] "kube-proxy-hgjjx" [4e800e34-c62f-403d-be48-18cfd539709a] Running
	I1028 18:50:56.715561   74377 system_pods.go:61] "kube-scheduler-auto-457876" [6f887176-9256-46ba-8a23-02b2e00fcd3a] Running
	I1028 18:50:56.715568   74377 system_pods.go:61] "storage-provisioner" [e6145756-7b08-4b5f-893c-be63650c1551] Running
	I1028 18:50:56.715575   74377 system_pods.go:74] duration metric: took 6.704571ms to wait for pod list to return data ...
	I1028 18:50:56.715586   74377 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:50:56.718085   74377 default_sa.go:45] found service account: "default"
	I1028 18:50:56.718104   74377 default_sa.go:55] duration metric: took 2.509564ms for default service account to be created ...
	I1028 18:50:56.718113   74377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:50:57.020420   74377 system_pods.go:86] 7 kube-system pods found
	I1028 18:50:57.020452   74377 system_pods.go:89] "coredns-7c65d6cfc9-sl46v" [024273c0-3854-4706-8ba9-02c4be99417e] Running
	I1028 18:50:57.020459   74377 system_pods.go:89] "etcd-auto-457876" [2bd5cd58-b8fa-4992-8dd7-a4865e737ac7] Running
	I1028 18:50:57.020465   74377 system_pods.go:89] "kube-apiserver-auto-457876" [1841467c-b594-45f1-a131-9e525a9db55a] Running
	I1028 18:50:57.020481   74377 system_pods.go:89] "kube-controller-manager-auto-457876" [5e9abff3-ec3e-47d9-a072-ab6f3f7f04d2] Running
	I1028 18:50:57.020489   74377 system_pods.go:89] "kube-proxy-hgjjx" [4e800e34-c62f-403d-be48-18cfd539709a] Running
	I1028 18:50:57.020497   74377 system_pods.go:89] "kube-scheduler-auto-457876" [6f887176-9256-46ba-8a23-02b2e00fcd3a] Running
	I1028 18:50:57.020503   74377 system_pods.go:89] "storage-provisioner" [e6145756-7b08-4b5f-893c-be63650c1551] Running
	I1028 18:50:57.020513   74377 system_pods.go:126] duration metric: took 302.392024ms to wait for k8s-apps to be running ...
	I1028 18:50:57.020525   74377 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:50:57.020579   74377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:50:57.036559   74377 system_svc.go:56] duration metric: took 16.025744ms WaitForService to wait for kubelet
	I1028 18:50:57.036596   74377 kubeadm.go:582] duration metric: took 42.086681596s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:50:57.036619   74377 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:50:57.040907   74377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:50:57.040935   74377 node_conditions.go:123] node cpu capacity is 2
	I1028 18:50:57.040945   74377 node_conditions.go:105] duration metric: took 4.321577ms to run NodePressure ...
	I1028 18:50:57.040955   74377 start.go:241] waiting for startup goroutines ...
	I1028 18:50:57.040962   74377 start.go:246] waiting for cluster config update ...
	I1028 18:50:57.040971   74377 start.go:255] writing updated cluster config ...
	I1028 18:50:57.080373   74377 ssh_runner.go:195] Run: rm -f paused
	I1028 18:50:57.129888   74377 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:50:57.229816   74377 out.go:177] * Done! kubectl is now configured to use "auto-457876" cluster and "default" namespace by default
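The pod_ready.go entries above poll each control-plane pod until its Ready condition reports True (or the 15m budget runs out). A rough client-go sketch of that kind of loop, not minikube's actual implementation; the kubeconfig path and 2s poll interval are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Illustrative kubeconfig path; minikube constructs its client from the profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-auto-457876", 15*time.Minute))
}
```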
	I1028 18:50:55.056858   75305 out.go:235]   - Generating certificates and keys ...
	I1028 18:50:55.056975   75305 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:50:55.057037   75305 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:50:55.290536   75305 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 18:50:55.397009   75305 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 18:50:55.690092   75305 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 18:50:55.774779   75305 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 18:50:56.091643   75305 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 18:50:56.091884   75305 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-457876 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	I1028 18:50:56.202630   75305 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 18:50:56.202773   75305 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-457876 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	I1028 18:50:56.298681   75305 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 18:50:56.604777   75305 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 18:50:56.813007   75305 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 18:50:56.813262   75305 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:50:56.935535   75305 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:50:57.016116   75305 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:50:57.590616   75305 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:50:57.834702   75305 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:50:58.018312   75305 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:50:58.019188   75305 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:50:58.022006   75305 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:50:58.023756   75305 out.go:235]   - Booting up control plane ...
	I1028 18:50:58.023902   75305 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:50:58.024020   75305 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:50:58.024193   75305 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:50:58.051284   75305 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:50:58.063754   75305 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:50:58.063848   75305 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:50:58.219476   75305 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:50:58.219652   75305 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:50:58.721840   75305 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.950457ms
	I1028 18:50:58.721960   75305 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:50:56.505826   74640 node_ready.go:49] node "kindnet-457876" has status "Ready":"True"
	I1028 18:50:56.505858   74640 node_ready.go:38] duration metric: took 12.643391495s for node "kindnet-457876" to be "Ready" ...
	I1028 18:50:56.505870   74640 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:50:56.518506   74640 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-mz2vg" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:58.527392   74640 pod_ready.go:103] pod "coredns-7c65d6cfc9-mz2vg" in "kube-system" namespace has status "Ready":"False"
	I1028 18:50:59.025046   74640 pod_ready.go:93] pod "coredns-7c65d6cfc9-mz2vg" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:59.025076   74640 pod_ready.go:82] duration metric: took 2.506539037s for pod "coredns-7c65d6cfc9-mz2vg" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.025089   74640 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.030583   74640 pod_ready.go:93] pod "etcd-kindnet-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:59.030607   74640 pod_ready.go:82] duration metric: took 5.509672ms for pod "etcd-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.030623   74640 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.035516   74640 pod_ready.go:93] pod "kube-apiserver-kindnet-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:59.035540   74640 pod_ready.go:82] duration metric: took 4.908658ms for pod "kube-apiserver-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.035553   74640 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.041481   74640 pod_ready.go:93] pod "kube-controller-manager-kindnet-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:59.041535   74640 pod_ready.go:82] duration metric: took 5.972038ms for pod "kube-controller-manager-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.041549   74640 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-8tn8h" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.092982   74640 pod_ready.go:93] pod "kube-proxy-8tn8h" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:59.093011   74640 pod_ready.go:82] duration metric: took 51.453567ms for pod "kube-proxy-8tn8h" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.093024   74640 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.491612   74640 pod_ready.go:93] pod "kube-scheduler-kindnet-457876" in "kube-system" namespace has status "Ready":"True"
	I1028 18:50:59.491637   74640 pod_ready.go:82] duration metric: took 398.60402ms for pod "kube-scheduler-kindnet-457876" in "kube-system" namespace to be "Ready" ...
	I1028 18:50:59.491648   74640 pod_ready.go:39] duration metric: took 2.985761532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:50:59.491664   74640 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:50:59.491708   74640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:50:59.513822   74640 api_server.go:72] duration metric: took 15.898468884s to wait for apiserver process to appear ...
	I1028 18:50:59.513852   74640 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:50:59.513875   74640 api_server.go:253] Checking apiserver healthz at https://192.168.61.41:8443/healthz ...
	I1028 18:50:59.519612   74640 api_server.go:279] https://192.168.61.41:8443/healthz returned 200:
	ok
	I1028 18:50:59.520963   74640 api_server.go:141] control plane version: v1.31.2
	I1028 18:50:59.520989   74640 api_server.go:131] duration metric: took 7.129636ms to wait for apiserver health ...
	I1028 18:50:59.521000   74640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:50:59.695082   74640 system_pods.go:59] 8 kube-system pods found
	I1028 18:50:59.695115   74640 system_pods.go:61] "coredns-7c65d6cfc9-mz2vg" [d297c0a9-9886-47d2-8856-d7c76261340e] Running
	I1028 18:50:59.695123   74640 system_pods.go:61] "etcd-kindnet-457876" [6d481e5c-ffbc-4572-a507-43d05f272530] Running
	I1028 18:50:59.695129   74640 system_pods.go:61] "kindnet-c4wgd" [dbd03dfa-95eb-45b9-8bb2-5b7ca5df9285] Running
	I1028 18:50:59.695134   74640 system_pods.go:61] "kube-apiserver-kindnet-457876" [e9f774b5-8007-4e98-820d-39d32b8ccc2a] Running
	I1028 18:50:59.695140   74640 system_pods.go:61] "kube-controller-manager-kindnet-457876" [fb8c130d-32c8-4817-a03d-f1794705fcde] Running
	I1028 18:50:59.695145   74640 system_pods.go:61] "kube-proxy-8tn8h" [fdd11ef5-ffd4-4d7a-b9f3-41acaaa6807b] Running
	I1028 18:50:59.695153   74640 system_pods.go:61] "kube-scheduler-kindnet-457876" [68a1bbd7-8325-42ee-99ef-0db4b2783e92] Running
	I1028 18:50:59.695158   74640 system_pods.go:61] "storage-provisioner" [ca05c899-f0c3-46c5-ae84-9c4117bd45cd] Running
	I1028 18:50:59.695166   74640 system_pods.go:74] duration metric: took 174.159462ms to wait for pod list to return data ...
	I1028 18:50:59.695178   74640 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:50:59.893739   74640 default_sa.go:45] found service account: "default"
	I1028 18:50:59.893769   74640 default_sa.go:55] duration metric: took 198.578579ms for default service account to be created ...
	I1028 18:50:59.893780   74640 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:51:00.095087   74640 system_pods.go:86] 8 kube-system pods found
	I1028 18:51:00.095121   74640 system_pods.go:89] "coredns-7c65d6cfc9-mz2vg" [d297c0a9-9886-47d2-8856-d7c76261340e] Running
	I1028 18:51:00.095130   74640 system_pods.go:89] "etcd-kindnet-457876" [6d481e5c-ffbc-4572-a507-43d05f272530] Running
	I1028 18:51:00.095136   74640 system_pods.go:89] "kindnet-c4wgd" [dbd03dfa-95eb-45b9-8bb2-5b7ca5df9285] Running
	I1028 18:51:00.095141   74640 system_pods.go:89] "kube-apiserver-kindnet-457876" [e9f774b5-8007-4e98-820d-39d32b8ccc2a] Running
	I1028 18:51:00.095147   74640 system_pods.go:89] "kube-controller-manager-kindnet-457876" [fb8c130d-32c8-4817-a03d-f1794705fcde] Running
	I1028 18:51:00.095153   74640 system_pods.go:89] "kube-proxy-8tn8h" [fdd11ef5-ffd4-4d7a-b9f3-41acaaa6807b] Running
	I1028 18:51:00.095158   74640 system_pods.go:89] "kube-scheduler-kindnet-457876" [68a1bbd7-8325-42ee-99ef-0db4b2783e92] Running
	I1028 18:51:00.095163   74640 system_pods.go:89] "storage-provisioner" [ca05c899-f0c3-46c5-ae84-9c4117bd45cd] Running
	I1028 18:51:00.095172   74640 system_pods.go:126] duration metric: took 201.38412ms to wait for k8s-apps to be running ...
	I1028 18:51:00.095180   74640 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:51:00.095234   74640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:51:00.113339   74640 system_svc.go:56] duration metric: took 18.147048ms WaitForService to wait for kubelet
	I1028 18:51:00.113378   74640 kubeadm.go:582] duration metric: took 16.498030681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:51:00.113403   74640 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:51:00.295000   74640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:51:00.295039   74640 node_conditions.go:123] node cpu capacity is 2
	I1028 18:51:00.295052   74640 node_conditions.go:105] duration metric: took 181.643207ms to run NodePressure ...
	I1028 18:51:00.295067   74640 start.go:241] waiting for startup goroutines ...
	I1028 18:51:00.295076   74640 start.go:246] waiting for cluster config update ...
	I1028 18:51:00.295089   74640 start.go:255] writing updated cluster config ...
	I1028 18:51:00.295390   74640 ssh_runner.go:195] Run: rm -f paused
	I1028 18:51:00.361616   74640 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:51:00.363313   74640 out.go:177] * Done! kubectl is now configured to use "kindnet-457876" cluster and "default" namespace by default
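The api_server.go lines above probe https://<apiserver>:8443/healthz and treat a 200 response with body "ok" as healthy. A small sketch of the same probe using the standard library; it assumes a client certificate/key pair for the cluster (paths are placeholders) and skips server verification purely for brevity, which the real check does not do:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the probe logged by api_server.go:
// GET https://<apiserver>/healthz and expect HTTP 200 with body "ok".
func checkHealthz(url, certFile, keyFile string) error {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// InsecureSkipVerify only to keep the sketch short; pin the cluster CA in practice.
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// Placeholder certificate paths; the real ones live under the profile directory.
	err := checkHealthz("https://192.168.61.41:8443/healthz",
		"/path/to/client.crt", "/path/to/client.key")
	fmt.Println(err)
}
```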
	I1028 18:51:04.225208   75305 kubeadm.go:310] [api-check] The API server is healthy after 5.503125363s
	I1028 18:51:04.243635   75305 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:51:04.264570   75305 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:51:04.300569   75305 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:51:04.300799   75305 kubeadm.go:310] [mark-control-plane] Marking the node flannel-457876 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:51:04.314578   75305 kubeadm.go:310] [bootstrap-token] Using token: hnmh4m.wp497yp5iu2pimkv
	I1028 18:51:04.316004   75305 out.go:235]   - Configuring RBAC rules ...
	I1028 18:51:04.316161   75305 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:51:04.323971   75305 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:51:04.334024   75305 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:51:04.340383   75305 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:51:04.346518   75305 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:51:04.351538   75305 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:51:04.634775   75305 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:51:05.063102   75305 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:51:05.634391   75305 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:51:05.634418   75305 kubeadm.go:310] 
	I1028 18:51:05.634515   75305 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:51:05.634539   75305 kubeadm.go:310] 
	I1028 18:51:05.634669   75305 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:51:05.634682   75305 kubeadm.go:310] 
	I1028 18:51:05.634713   75305 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:51:05.634804   75305 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:51:05.634885   75305 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:51:05.634898   75305 kubeadm.go:310] 
	I1028 18:51:05.634945   75305 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:51:05.634952   75305 kubeadm.go:310] 
	I1028 18:51:05.635023   75305 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:51:05.635033   75305 kubeadm.go:310] 
	I1028 18:51:05.635088   75305 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:51:05.635159   75305 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:51:05.635218   75305 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:51:05.635224   75305 kubeadm.go:310] 
	I1028 18:51:05.635312   75305 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:51:05.635435   75305 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:51:05.635462   75305 kubeadm.go:310] 
	I1028 18:51:05.635585   75305 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hnmh4m.wp497yp5iu2pimkv \
	I1028 18:51:05.635721   75305 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:51:05.635752   75305 kubeadm.go:310] 	--control-plane 
	I1028 18:51:05.635759   75305 kubeadm.go:310] 
	I1028 18:51:05.635889   75305 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:51:05.635899   75305 kubeadm.go:310] 
	I1028 18:51:05.635996   75305 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hnmh4m.wp497yp5iu2pimkv \
	I1028 18:51:05.636125   75305 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:51:05.636578   75305 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
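The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which is how joining nodes pin the CA before trusting it. A short sketch that recomputes the value from ca.crt (the path matches where this log copies the CA on the node, but is only illustrative):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes the value passed as --discovery-token-ca-cert-hash:
// sha256 over the CA certificate's raw Subject Public Key Info.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	fmt.Println(caCertHash("/var/lib/minikube/certs/ca.crt"))
}
```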
	I1028 18:51:05.636614   75305 cni.go:84] Creating CNI manager for "flannel"
	I1028 18:51:05.638123   75305 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1028 18:51:05.639282   75305 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 18:51:05.645607   75305 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 18:51:05.645621   75305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1028 18:51:05.664839   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 18:51:06.063628   75305 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:51:06.063696   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:06.063696   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-457876 minikube.k8s.io/updated_at=2024_10_28T18_51_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=flannel-457876 minikube.k8s.io/primary=true
	I1028 18:51:06.232567   75305 ops.go:34] apiserver oom_adj: -16
	I1028 18:51:06.232674   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:06.733653   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:07.233724   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:07.733693   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:08.233638   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:08.733757   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:09.233318   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:09.733216   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:10.233771   75305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:51:10.339558   75305 kubeadm.go:1113] duration metric: took 4.275928109s to wait for elevateKubeSystemPrivileges
	I1028 18:51:10.339593   75305 kubeadm.go:394] duration metric: took 15.798721919s to StartCluster
	I1028 18:51:10.339609   75305 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:51:10.339673   75305 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:51:10.341731   75305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:51:10.341926   75305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 18:51:10.341940   75305 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:51:10.341925   75305 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:51:10.342030   75305 addons.go:69] Setting storage-provisioner=true in profile "flannel-457876"
	I1028 18:51:10.342031   75305 addons.go:69] Setting default-storageclass=true in profile "flannel-457876"
	I1028 18:51:10.342049   75305 addons.go:234] Setting addon storage-provisioner=true in "flannel-457876"
	I1028 18:51:10.342078   75305 host.go:66] Checking if "flannel-457876" exists ...
	I1028 18:51:10.342147   75305 config.go:182] Loaded profile config "flannel-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:51:10.342050   75305 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-457876"
	I1028 18:51:10.342483   75305 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:51:10.342518   75305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:51:10.342570   75305 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:51:10.342605   75305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:51:10.344580   75305 out.go:177] * Verifying Kubernetes components...
	I1028 18:51:10.345845   75305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:51:10.359648   75305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1028 18:51:10.360051   75305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I1028 18:51:10.360053   75305 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:51:10.360415   75305 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:51:10.360740   75305 main.go:141] libmachine: Using API Version  1
	I1028 18:51:10.360758   75305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:51:10.360872   75305 main.go:141] libmachine: Using API Version  1
	I1028 18:51:10.360895   75305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:51:10.361111   75305 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:51:10.361294   75305 main.go:141] libmachine: (flannel-457876) Calling .GetState
	I1028 18:51:10.361365   75305 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:51:10.361780   75305 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:51:10.361807   75305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:51:10.364750   75305 addons.go:234] Setting addon default-storageclass=true in "flannel-457876"
	I1028 18:51:10.364789   75305 host.go:66] Checking if "flannel-457876" exists ...
	I1028 18:51:10.365071   75305 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:51:10.365105   75305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:51:10.379850   75305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I1028 18:51:10.380244   75305 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:51:10.380794   75305 main.go:141] libmachine: Using API Version  1
	I1028 18:51:10.380811   75305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:51:10.381383   75305 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:51:10.381556   75305 main.go:141] libmachine: (flannel-457876) Calling .GetState
	I1028 18:51:10.381587   75305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43435
	I1028 18:51:10.381971   75305 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:51:10.382645   75305 main.go:141] libmachine: Using API Version  1
	I1028 18:51:10.382664   75305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:51:10.383137   75305 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:51:10.383375   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:51:10.383698   75305 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:51:10.383742   75305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:51:10.385271   75305 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:51:10.386701   75305 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:51:10.386713   75305 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:51:10.386725   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:51:10.389747   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:51:10.390263   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:51:10.390285   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:51:10.390566   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:51:10.390751   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:51:10.390896   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:51:10.391045   75305 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa Username:docker}
	I1028 18:51:10.399172   75305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I1028 18:51:10.399574   75305 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:51:10.399969   75305 main.go:141] libmachine: Using API Version  1
	I1028 18:51:10.399990   75305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:51:10.400364   75305 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:51:10.400582   75305 main.go:141] libmachine: (flannel-457876) Calling .GetState
	I1028 18:51:10.401977   75305 main.go:141] libmachine: (flannel-457876) Calling .DriverName
	I1028 18:51:10.402177   75305 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:51:10.402195   75305 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:51:10.402218   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHHostname
	I1028 18:51:10.404251   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:51:10.404654   75305 main.go:141] libmachine: (flannel-457876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:59:d4", ip: ""} in network mk-flannel-457876: {Iface:virbr4 ExpiryTime:2024-10-28 19:50:37 +0000 UTC Type:0 Mac:52:54:00:57:59:d4 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:flannel-457876 Clientid:01:52:54:00:57:59:d4}
	I1028 18:51:10.404678   75305 main.go:141] libmachine: (flannel-457876) DBG | domain flannel-457876 has defined IP address 192.168.72.134 and MAC address 52:54:00:57:59:d4 in network mk-flannel-457876
	I1028 18:51:10.404844   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHPort
	I1028 18:51:10.404999   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHKeyPath
	I1028 18:51:10.405159   75305 main.go:141] libmachine: (flannel-457876) Calling .GetSSHUsername
	I1028 18:51:10.405311   75305 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/flannel-457876/id_rsa Username:docker}
	I1028 18:51:10.584990   75305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:51:10.585128   75305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 18:51:10.624510   75305 node_ready.go:35] waiting up to 15m0s for node "flannel-457876" to be "Ready" ...
	I1028 18:51:10.688649   75305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:51:10.718643   75305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:51:11.138900   75305 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1028 18:51:11.397072   75305 main.go:141] libmachine: Making call to close driver server
	I1028 18:51:11.397102   75305 main.go:141] libmachine: (flannel-457876) Calling .Close
	I1028 18:51:11.397081   75305 main.go:141] libmachine: Making call to close driver server
	I1028 18:51:11.397170   75305 main.go:141] libmachine: (flannel-457876) Calling .Close
	I1028 18:51:11.397383   75305 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:51:11.397520   75305 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:51:11.397533   75305 main.go:141] libmachine: Making call to close driver server
	I1028 18:51:11.397541   75305 main.go:141] libmachine: (flannel-457876) Calling .Close
	I1028 18:51:11.397485   75305 main.go:141] libmachine: (flannel-457876) DBG | Closing plugin on server side
	I1028 18:51:11.397500   75305 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:51:11.397504   75305 main.go:141] libmachine: (flannel-457876) DBG | Closing plugin on server side
	I1028 18:51:11.397746   75305 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:51:11.397754   75305 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:51:11.398224   75305 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:51:11.398248   75305 main.go:141] libmachine: Making call to close driver server
	I1028 18:51:11.398264   75305 main.go:141] libmachine: (flannel-457876) Calling .Close
	I1028 18:51:11.398511   75305 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:51:11.398522   75305 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:51:11.405359   75305 main.go:141] libmachine: Making call to close driver server
	I1028 18:51:11.405380   75305 main.go:141] libmachine: (flannel-457876) Calling .Close
	I1028 18:51:11.405680   75305 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:51:11.405722   75305 main.go:141] libmachine: (flannel-457876) DBG | Closing plugin on server side
	I1028 18:51:11.405733   75305 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:51:11.408091   75305 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 18:51:11.409500   75305 addons.go:510] duration metric: took 1.067556157s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 18:51:11.642468   75305 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-457876" context rescaled to 1 replicas
	I1028 18:51:12.628314   75305 node_ready.go:53] node "flannel-457876" has status "Ready":"False"
	I1028 18:51:14.628352   75305 node_ready.go:53] node "flannel-457876" has status "Ready":"False"
	I1028 18:51:17.129253   75305 node_ready.go:53] node "flannel-457876" has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.200619878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141482200598375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=584e33a4-6502-46c5-ac94-f2b529fef295 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.201312887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84902c3d-61e1-4698-bbad-0bdb67b756b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.201372538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84902c3d-61e1-4698-bbad-0bdb67b756b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.201574283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84902c3d-61e1-4698-bbad-0bdb67b756b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.254818072Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3d7089a-4ac6-42d9-b86d-8553fa436c84 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.254894073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3d7089a-4ac6-42d9-b86d-8553fa436c84 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.257051535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99341536-6685-40e1-9f7a-f77aed7bdb8c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.257517833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141482257480883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99341536-6685-40e1-9f7a-f77aed7bdb8c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.258345906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99e51db6-dd5a-4adf-b2e6-ea8c30feb40b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.258395521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99e51db6-dd5a-4adf-b2e6-ea8c30feb40b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.258612385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99e51db6-dd5a-4adf-b2e6-ea8c30feb40b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.311678330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d8ba9ec-09ea-43ec-ad2a-6af61e712231 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.311784940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d8ba9ec-09ea-43ec-ad2a-6af61e712231 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.313313270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=907da818-efde-46c8-8397-ed8dd84337bc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.313875508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141482313846078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=907da818-efde-46c8-8397-ed8dd84337bc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.314774131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d0958bc-733b-4863-a987-6e3343089c57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.314847691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d0958bc-733b-4863-a987-6e3343089c57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.315273365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d0958bc-733b-4863-a987-6e3343089c57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.361717772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aabb8095-84b3-4798-9b05-51476affff16 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.361850300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aabb8095-84b3-4798-9b05-51476affff16 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.364069319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=073db2f7-623b-4228-8b8f-356bfc99af94 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.364633805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141482364605197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=073db2f7-623b-4228-8b8f-356bfc99af94 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.365469537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b65a2c10-6fe4-4987-80d5-f00a65e3e7d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.365572404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b65a2c10-6fe4-4987-80d5-f00a65e3e7d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:51:22 default-k8s-diff-port-692033 crio[715]: time="2024-10-28 18:51:22.365840651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48,PodSandboxId:42c2a34c0cb4c3d96eb7263504c23df441235e2dcd2d19de8379729b532d5bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447515393065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rhvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41008126-560b-4c8e-b110-4a180c56ab0b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7,PodSandboxId:9bda050b81c88a952e3933472f9327ee632d912e6778882b35eeb5c6c33e0556,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140447478562315,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1021f60d-1944-4f55-a4d9-1a8f8a3ae0df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733,PodSandboxId:f7365a572ebb0df2e1f38083209f60ee58896297169bb75077f80cfce9358ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140447406971991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-25sf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c4e4eda2-a141-4111-b71b-ae8efd6e250f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5,PodSandboxId:bf45e668a8b5d763dbd0498ce68937dedb9847e9ae5c10c41986ac263d9d469c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730140446638022185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b56jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c73611b-f055-4fa4-9665-f73469c6e236,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb,PodSandboxId:b3d5706a965ffd13bc945401e22a7705a648a7c833bee078f358e955c42d2226,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140435947096538,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0337e77ead053b59bf81cd3a5250b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b,PodSandboxId:dcb9b277b485f9acd24ef909e2818eb9073838cce1fa76e7aa211896a993868c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140435957483019,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3461175f27b54099cc6ab4d60506c1,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862,PodSandboxId:7a508311845798dae2ed5fc357bbab6a9500898c0344fb201c8b63fd9f441dd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140435914337273,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6,PodSandboxId:cb0bb21858003bb7acd01368e8044d37ed10b5fa5fd24db1f665f370dba3797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140435891309695,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc066857ba4fc3eddf8d5c21ba256fad,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df,PodSandboxId:42c267a91b0aea04869b2371f1dfe64c544ecc8728f5e86f2703fd9af4e657ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140149060522382,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-692033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256ca7112cfabbfe46c479e764319c34,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b65a2c10-6fe4-4987-80d5-f00a65e3e7d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60c0aac9932e8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   42c2a34c0cb4c       coredns-7c65d6cfc9-rhvmm
	405dd9d867300       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   9bda050b81c88       storage-provisioner
	569ca30401d69       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   f7365a572ebb0       coredns-7c65d6cfc9-25sf7
	150d2b4f66144       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   17 minutes ago      Running             kube-proxy                0                   bf45e668a8b5d       kube-proxy-b56jx
	044fcc47181f7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   17 minutes ago      Running             kube-scheduler            2                   dcb9b277b485f       kube-scheduler-default-k8s-diff-port-692033
	7fc4b09c10022       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   b3d5706a965ff       etcd-default-k8s-diff-port-692033
	78fb2afab1c5b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   17 minutes ago      Running             kube-apiserver            2                   7a50831184579       kube-apiserver-default-k8s-diff-port-692033
	f82fa01be0383       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   17 minutes ago      Running             kube-controller-manager   2                   cb0bb21858003       kube-controller-manager-default-k8s-diff-port-692033
	7e512fff51a6b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 minutes ago      Exited              kube-apiserver            1                   42c267a91b0ae       kube-apiserver-default-k8s-diff-port-692033
	
	
	==> coredns [569ca30401d69e9aac1500f885824a3a2a17511f1738b19b95cabb1fa0b17733] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [60c0aac9932e8a61473f6f47fdf175bb9337c37c7b7adf98755bf30ae2337c48] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-692033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-692033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=default-k8s-diff-port-692033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:33:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-692033
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:51:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:49:30 +0000   Mon, 28 Oct 2024 18:33:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:49:30 +0000   Mon, 28 Oct 2024 18:33:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:49:30 +0000   Mon, 28 Oct 2024 18:33:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:49:30 +0000   Mon, 28 Oct 2024 18:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    default-k8s-diff-port-692033
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df22995e8bda4630892d9a7d579ec690
	  System UUID:                df22995e-8bda-4630-892d-9a7d579ec690
	  Boot ID:                    d9a76dc0-ef12-43e1-8b0b-0c10f8a07301
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-25sf7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-rhvmm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-692033                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-692033             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-692033    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-b56jx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-692033             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-8vz62                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-692033 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-692033 event: Registered Node default-k8s-diff-port-692033 in Controller
	
	
	==> dmesg <==
	[  +0.055872] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.269265] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.563578] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.379279] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 18:29] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.055965] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055826] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.194006] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.130454] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.305093] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.232771] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +2.275710] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.059753] kauditd_printk_skb: 158 callbacks suppressed
	[  +4.998100] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.328196] kauditd_printk_skb: 54 callbacks suppressed
	[Oct28 18:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.332543] systemd-fstab-generator[2608]: Ignoring "noauto" option for root device
	[  +4.560979] kauditd_printk_skb: 56 callbacks suppressed
	[Oct28 18:34] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +4.879427] systemd-fstab-generator[3041]: Ignoring "noauto" option for root device
	[  +0.096804] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.297258] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [7fc4b09c100222ec3807c13ca415887d4ff4480a00fd9dee48140e31dddeb5cb] <==
	{"level":"warn","ts":"2024-10-28T18:49:35.811231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.609154ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12933936620829629401 > lease_revoke:<id:337e92d467f75b7e>","response":"size:28"}
	{"level":"warn","ts":"2024-10-28T18:50:00.427223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.692181ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12933936620829629544 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.215\" mod_revision:1247 > success:<request_put:<key:\"/registry/masterleases/192.168.39.215\" value_size:67 lease:3710564583974853734 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.215\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T18:50:00.427759Z","caller":"traceutil/trace.go:171","msg":"trace[2091481544] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"264.457453ms","start":"2024-10-28T18:50:00.163266Z","end":"2024-10-28T18:50:00.427723Z","steps":["trace[2091481544] 'process raft request'  (duration: 129.131245ms)","trace[2091481544] 'compare'  (duration: 134.517018ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T18:50:01.344276Z","caller":"traceutil/trace.go:171","msg":"trace[1292784888] transaction","detail":"{read_only:false; response_revision:1257; number_of_response:1; }","duration":"244.711521ms","start":"2024-10-28T18:50:01.099547Z","end":"2024-10-28T18:50:01.344259Z","steps":["trace[1292784888] 'process raft request'  (duration: 244.528827ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T18:50:29.250885Z","caller":"traceutil/trace.go:171","msg":"trace[349362044] linearizableReadLoop","detail":"{readStateIndex:1491; appliedIndex:1490; }","duration":"445.469268ms","start":"2024-10-28T18:50:28.805391Z","end":"2024-10-28T18:50:29.250860Z","steps":["trace[349362044] 'read index received'  (duration: 445.300315ms)","trace[349362044] 'applied index is now lower than readState.Index'  (duration: 168.455µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T18:50:29.251087Z","caller":"traceutil/trace.go:171","msg":"trace[543925008] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"582.793912ms","start":"2024-10-28T18:50:28.668260Z","end":"2024-10-28T18:50:29.251054Z","steps":["trace[543925008] 'process raft request'  (duration: 582.483313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:50:29.251198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:50:28.668244Z","time spent":"582.867646ms","remote":"127.0.0.1:42600","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-h25n5xq3rcct5j34vjtuytwhha\" mod_revision:1269 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-h25n5xq3rcct5j34vjtuytwhha\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-h25n5xq3rcct5j34vjtuytwhha\" > >"}
	{"level":"warn","ts":"2024-10-28T18:50:29.251303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"445.910217ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:50:29.251357Z","caller":"traceutil/trace.go:171","msg":"trace[550896446] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1278; }","duration":"445.947772ms","start":"2024-10-28T18:50:28.805385Z","end":"2024-10-28T18:50:29.251332Z","steps":["trace[550896446] 'agreement among raft nodes before linearized reading'  (duration: 445.900037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:50:29.251535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.037173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:50:29.251570Z","caller":"traceutil/trace.go:171","msg":"trace[1514700044] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1278; }","duration":"421.074625ms","start":"2024-10-28T18:50:28.830490Z","end":"2024-10-28T18:50:29.251565Z","steps":["trace[1514700044] 'agreement among raft nodes before linearized reading'  (duration: 421.024315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:50:29.251598Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:50:28.830447Z","time spent":"421.146491ms","remote":"127.0.0.1:42524","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-28T18:50:29.650597Z","caller":"traceutil/trace.go:171","msg":"trace[1739492323] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"116.979216ms","start":"2024-10-28T18:50:29.533603Z","end":"2024-10-28T18:50:29.650582Z","steps":["trace[1739492323] 'process raft request'  (duration: 116.819474ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:50:29.973315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.493664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:50:29.973542Z","caller":"traceutil/trace.go:171","msg":"trace[2133185967] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1279; }","duration":"144.706084ms","start":"2024-10-28T18:50:29.828800Z","end":"2024-10-28T18:50:29.973506Z","steps":["trace[2133185967] 'range keys from in-memory index tree'  (duration: 144.437347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:50:29.973754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.166976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:50:29.973867Z","caller":"traceutil/trace.go:171","msg":"trace[630517947] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1279; }","duration":"123.286022ms","start":"2024-10-28T18:50:29.850574Z","end":"2024-10-28T18:50:29.973860Z","steps":["trace[630517947] 'count revisions from in-memory index tree'  (duration: 123.11555ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:50:29.973763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.189491ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:50:29.974123Z","caller":"traceutil/trace.go:171","msg":"trace[721698883] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1279; }","duration":"168.562504ms","start":"2024-10-28T18:50:29.805547Z","end":"2024-10-28T18:50:29.974110Z","steps":["trace[721698883] 'range keys from in-memory index tree'  (duration: 167.456892ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T18:50:30.286459Z","caller":"traceutil/trace.go:171","msg":"trace[1658434493] transaction","detail":"{read_only:false; response_revision:1280; number_of_response:1; }","duration":"117.854698ms","start":"2024-10-28T18:50:30.168589Z","end":"2024-10-28T18:50:30.286444Z","steps":["trace[1658434493] 'process raft request'  (duration: 64.651006ms)","trace[1658434493] 'compare'  (duration: 52.967802ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T18:50:55.781967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.20587ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12933936620829629879 > lease_revoke:<id:337e92d467f75d60>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T18:50:55.909535Z","caller":"traceutil/trace.go:171","msg":"trace[616650605] linearizableReadLoop","detail":"{readStateIndex:1520; appliedIndex:1519; }","duration":"104.306183ms","start":"2024-10-28T18:50:55.805216Z","end":"2024-10-28T18:50:55.909522Z","steps":["trace[616650605] 'read index received'  (duration: 104.167499ms)","trace[616650605] 'applied index is now lower than readState.Index'  (duration: 138.046µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T18:50:55.909886Z","caller":"traceutil/trace.go:171","msg":"trace[325809760] transaction","detail":"{read_only:false; response_revision:1301; number_of_response:1; }","duration":"118.515405ms","start":"2024-10-28T18:50:55.791350Z","end":"2024-10-28T18:50:55.909865Z","steps":["trace[325809760] 'process raft request'  (duration: 118.076362ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:50:55.909999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.797209ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:50:55.910966Z","caller":"traceutil/trace.go:171","msg":"trace[299386958] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1301; }","duration":"105.724465ms","start":"2024-10-28T18:50:55.805182Z","end":"2024-10-28T18:50:55.910906Z","steps":["trace[299386958] 'agreement among raft nodes before linearized reading'  (duration: 104.731166ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:51:22 up 22 min,  0 users,  load average: 0.05, 0.16, 0.16
	Linux default-k8s-diff-port-692033 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [78fb2afab1c5b10f26c55b99d50daaf8b81f3682240f3b6648ca6dd3af84f862] <==
	I1028 18:46:59.470288       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:46:59.471368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:48:58.468633       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:48:58.469398       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 18:48:59.471528       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:48:59.471543       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:48:59.471739       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 18:48:59.471814       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 18:48:59.472890       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:48:59.472990       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:49:59.473708       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:49:59.473791       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:49:59.474145       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1028 18:49:59.474141       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 18:49:59.475451       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:49:59.475491       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [7e512fff51a6b85e0065b314f5e2178451d6c670f1eb177ec46ebab5b50eb6df] <==
	W1028 18:33:49.120170       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.120177       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.158146       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.171791       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.177249       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.204467       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.208739       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.233468       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.260466       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.337849       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.347689       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.347785       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.356351       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.370185       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.409261       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.422978       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.601743       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.627147       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.627545       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.658781       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.768606       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.838876       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:49.962703       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:50.146043       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:33:50.179239       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f82fa01be038374eb4a370e30b6725f6996477f71d605c2303975bba0432d3e6] <==
	E1028 18:46:05.546819       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:46:06.053756       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:46:35.553666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:46:36.061016       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:47:05.560840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:47:06.068725       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:47:35.567678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:47:36.076868       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:48:05.573774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:48:06.085294       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:48:35.579791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:48:36.096812       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:49:05.587107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:49:06.105335       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:49:30.464352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-692033"
	E1028 18:49:35.594789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:49:36.120120       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:50:05.602208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:50:06.130870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:50:20.112040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="255.824µs"
	I1028 18:50:35.116763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="94.863µs"
	E1028 18:50:35.609409       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:50:36.138687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:51:05.618474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:51:06.147974       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [150d2b4f661443d60a3810f42ce4adec688f64d727b169965683f84f80dbd5a5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:34:07.356840       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:34:07.374545       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E1028 18:34:07.374637       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:34:07.620280       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:34:07.620337       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:34:07.620369       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:34:07.655596       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:34:07.655811       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:34:07.655823       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:34:07.659393       1 config.go:199] "Starting service config controller"
	I1028 18:34:07.659406       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:34:07.659420       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:34:07.659423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:34:07.659809       1 config.go:328] "Starting node config controller"
	I1028 18:34:07.659841       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:34:07.760606       1 shared_informer.go:320] Caches are synced for node config
	I1028 18:34:07.760652       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:34:07.760662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [044fcc47181f7ab6523713cc71e5644081ae91f22af7315e8a6607d8c09d2d3b] <==
	W1028 18:33:58.479993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:33:58.480006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 18:33:58.480076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 18:33:58.480144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:33:58.480210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.479508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:58.480484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:33:58.481048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:58.480341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 18:33:58.481243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.335172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 18:33:59.335224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.358647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 18:33:59.358702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.390271       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 18:33:59.390403       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 18:33:59.615126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:33:59.615160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:33:59.627660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 18:33:59.627787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 18:34:01.265099       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 18:50:11 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:11.422886    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141411422271411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:20 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:20.096770    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:50:21 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:21.425333    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141421424790790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:21 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:21.425680    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141421424790790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:31 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:31.428179    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141431427426637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:31 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:31.428298    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141431427426637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:35 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:35.098488    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:50:41 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:41.431068    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141441430756306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:41 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:41.431096    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141441430756306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:50 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:50.098203    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:50:51 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:51.433527    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141451432637693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:50:51 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:50:51.434162    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141451432637693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:51:01 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:01.135084    2937 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 18:51:01 default-k8s-diff-port-692033 kubelet[2937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 18:51:01 default-k8s-diff-port-692033 kubelet[2937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 18:51:01 default-k8s-diff-port-692033 kubelet[2937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 18:51:01 default-k8s-diff-port-692033 kubelet[2937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 18:51:01 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:01.436331    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141461435839502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:51:01 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:01.436442    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141461435839502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:51:04 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:04.097906    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:51:11 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:11.438262    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141471437673252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:51:11 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:11.438537    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141471437673252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:51:16 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:16.097907    2937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vz62" podUID="b6498143-8e21-4f11-9d29-e20964e74203"
	Oct 28 18:51:21 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:21.442044    2937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141481441344488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:51:21 default-k8s-diff-port-692033 kubelet[2937]: E1028 18:51:21.442071    2937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141481441344488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [405dd9d867300e54b3427c3a694166d6a58349b0f59123418d2e0ccea9483ae7] <==
	I1028 18:34:07.700230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 18:34:07.715488       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 18:34:07.715714       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 18:34:07.723879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 18:34:07.724170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-692033_25241bb0-fdda-4304-ae05-b56a6882e94d!
	I1028 18:34:07.724271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3cc5af26-302e-492f-881a-248b50a59ab1", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-692033_25241bb0-fdda-4304-ae05-b56a6882e94d became leader
	I1028 18:34:07.825397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-692033_25241bb0-fdda-4304-ae05-b56a6882e94d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8vz62
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 describe pod metrics-server-6867b74b74-8vz62
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-692033 describe pod metrics-server-6867b74b74-8vz62: exit status 1 (67.233392ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8vz62" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-692033 describe pod metrics-server-6867b74b74-8vz62: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (484.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (341.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021370 -n embed-certs-021370
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 18:49:22.646215525 +0000 UTC m=+6202.104259844
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-021370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-021370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.487µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-021370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-021370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-021370 logs -n 25: (1.161408889s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:48 UTC |
	| start   | -p newest-cni-724173 --memory=2200 --alsologtostderr   | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:48 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-724173             | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-724173                                   | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:48 UTC | 28 Oct 24 18:49 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-724173                  | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:49 UTC | 28 Oct 24 18:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-724173 --memory=2200 --alsologtostderr   | newest-cni-724173            | jenkins | v1.34.0 | 28 Oct 24 18:49 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:49 UTC | 28 Oct 24 18:49 UTC |
	| start   | -p auto-457876 --memory=3072                           | auto-457876                  | jenkins | v1.34.0 | 28 Oct 24 18:49 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:49:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:49:19.348409   74377 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:49:19.348688   74377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:49:19.348698   74377 out.go:358] Setting ErrFile to fd 2...
	I1028 18:49:19.348702   74377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:49:19.348947   74377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:49:19.349708   74377 out.go:352] Setting JSON to false
	I1028 18:49:19.350761   74377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9102,"bootTime":1730132257,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:49:19.350848   74377 start.go:139] virtualization: kvm guest
	I1028 18:49:19.353040   74377 out.go:177] * [auto-457876] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:49:19.354239   74377 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:49:19.354299   74377 notify.go:220] Checking for updates...
	I1028 18:49:19.356587   74377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:49:19.357975   74377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:49:19.359277   74377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:49:19.360480   74377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:49:19.361707   74377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:49:19.363363   74377 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:19.363522   74377 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:19.363688   74377 config.go:182] Loaded profile config "newest-cni-724173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:49:19.363795   74377 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:49:19.399672   74377 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 18:49:19.400868   74377 start.go:297] selected driver: kvm2
	I1028 18:49:19.400895   74377 start.go:901] validating driver "kvm2" against <nil>
	I1028 18:49:19.400909   74377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:49:19.401850   74377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:49:19.401954   74377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:49:19.417245   74377 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:49:19.417302   74377 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 18:49:19.417573   74377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:49:19.417612   74377 cni.go:84] Creating CNI manager for ""
	I1028 18:49:19.417669   74377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:49:19.417685   74377 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 18:49:19.417762   74377 start.go:340] cluster config:
	{Name:auto-457876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-457876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:49:19.417893   74377 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:49:19.419510   74377 out.go:177] * Starting "auto-457876" primary control-plane node in "auto-457876" cluster
	I1028 18:49:18.516204   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:18.516693   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:18.516722   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:18.516671   74089 retry.go:31] will retry after 3.138353962s: waiting for machine to come up
	I1028 18:49:21.657508   74054 main.go:141] libmachine: (newest-cni-724173) DBG | domain newest-cni-724173 has defined MAC address 52:54:00:55:19:fb in network mk-newest-cni-724173
	I1028 18:49:21.657871   74054 main.go:141] libmachine: (newest-cni-724173) DBG | unable to find current IP address of domain newest-cni-724173 in network mk-newest-cni-724173
	I1028 18:49:21.657897   74054 main.go:141] libmachine: (newest-cni-724173) DBG | I1028 18:49:21.657826   74089 retry.go:31] will retry after 4.52057878s: waiting for machine to come up
	
	
	==> CRI-O <==
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.216948544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141363216928341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c71f9cb-43ec-4e6e-b55a-2ff1662b1a85 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.217828666Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=571838d5-4d0a-4e79-94c6-dc79a9c5693e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.217877573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=571838d5-4d0a-4e79-94c6-dc79a9c5693e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.218087228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=571838d5-4d0a-4e79-94c6-dc79a9c5693e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.254393809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dd58a73-b773-4593-8dbc-fc16ac428d9d name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.254456754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dd58a73-b773-4593-8dbc-fc16ac428d9d name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.255490199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8940296-96db-436f-a0d2-d24c554bf25e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.255945893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141363255922813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8940296-96db-436f-a0d2-d24c554bf25e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.256380888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22a8b211-5b44-4e34-9b78-2446d8f4c856 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.256451637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22a8b211-5b44-4e34-9b78-2446d8f4c856 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.256707465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22a8b211-5b44-4e34-9b78-2446d8f4c856 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.293015682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d4d0832-3ce6-4aa8-9566-fed52cebcf10 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.293101075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d4d0832-3ce6-4aa8-9566-fed52cebcf10 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.294253332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a513fb90-0299-4b28-8159-a5595f30e1f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.294703762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141363294681949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a513fb90-0299-4b28-8159-a5595f30e1f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.295211406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b71dbce0-29ec-44ae-bbb4-0ef3a7d4b6da name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.295282142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b71dbce0-29ec-44ae-bbb4-0ef3a7d4b6da name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.295461333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b71dbce0-29ec-44ae-bbb4-0ef3a7d4b6da name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.327636421Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4fe07157-4801-402f-801a-f85eb046dbf5 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.327704367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4fe07157-4801-402f-801a-f85eb046dbf5 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.328850067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e3e7c5f-0e1e-4579-9570-0bf59efcc8de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.329492861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141363329469761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e3e7c5f-0e1e-4579-9570-0bf59efcc8de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.330117285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86ab504c-ee41-437c-9a48-7b9738714dac name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.330185120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86ab504c-ee41-437c-9a48-7b9738714dac name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:49:23 embed-certs-021370 crio[708]: time="2024-10-28 18:49:23.330392443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92,PodSandboxId:413e67bcd3b89eb551442c20b10a9678cb8fbe235e1268a7a4eec0582b1e3386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730140469082317857,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0ac8ad-5ba0-47a0-8613-7a6fba893f06,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b,PodSandboxId:7fbafc94d70e0c892dfdb7a0815899434f87a94085f8a2771f34dc497bc4afb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468924234529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qw5gl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605aa4fe-2ed4-4246-a087-614d56c64c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c,PodSandboxId:787d8ca532edb9c134b1999d0bb83d399cad97016f9e486cbae51e2204055189,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730140468584085987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d5pk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
37c2887-86c5-485e-9548-49f0cb407435,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd,PodSandboxId:b0f9161c9eb2934f3b7a560454a5b01fb2178ce1fe3a1afde65b4324a4b8f4f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730140468343352399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nrr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cffbf7-943e-4853-9197-d4275a479d5d,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41,PodSandboxId:46f3ff135b802e81544367d7ae811e8b737bb4504edaa30d69347e66409b72eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730140457067206266,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6652b43f718e4589ac2f1db67f538ffd,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f,PodSandboxId:977cc130e70848738b095bc7575e0c00757e0771483ee0cd6d36adb0273b0a3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730140457023930074,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1837860eebdbee666a5bf48582978405,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da,PodSandboxId:58b3ae9b9ad906473a66f1cb8a04a5bdb1fd0ce06e8bf0b73c8ebae6924bba62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730140456985391473,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47fc9a7e4f63e8faeda19e3f88f4a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49,PodSandboxId:379268ef9c69e71f2d16907ff682ebd8572ef74c8fe03a48d08f272b61e65516,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730140456933263711,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46,PodSandboxId:41009c46e8497d47055e4f38fce49b84ce731c88123f53511522b24c860dedfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730140169496363906,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-021370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5af8521957e359b93e9c03519eda4b4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86ab504c-ee41-437c-9a48-7b9738714dac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	006aada4f30c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   413e67bcd3b89       storage-provisioner
	104264eddd009       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   7fbafc94d70e0       coredns-7c65d6cfc9-qw5gl
	71661974f7b38       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   787d8ca532edb       coredns-7c65d6cfc9-d5pk8
	ee22f5ea76449       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   14 minutes ago      Running             kube-proxy                0                   b0f9161c9eb29       kube-proxy-nrr6g
	923e774fae799       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   15 minutes ago      Running             kube-scheduler            2                   46f3ff135b802       kube-scheduler-embed-certs-021370
	84f43ce11e608       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   977cc130e7084       etcd-embed-certs-021370
	3f031e8707fea       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   15 minutes ago      Running             kube-controller-manager   2                   58b3ae9b9ad90       kube-controller-manager-embed-certs-021370
	d269f62b266bb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   15 minutes ago      Running             kube-apiserver            2                   379268ef9c69e       kube-apiserver-embed-certs-021370
	f7431ff218449       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   19 minutes ago      Exited              kube-apiserver            1                   41009c46e8497       kube-apiserver-embed-certs-021370
	
	
	==> coredns [104264eddd0099c46d05350399838e9f95be235a84933cab50c21df472d3034b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [71661974f7b38ef9ecd7e94d45ab3da66f64411e442b468e568dac58aec17f2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-021370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-021370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
	                    minikube.k8s.io/name=embed-certs-021370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 18:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-021370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 18:49:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 18:44:45 +0000   Mon, 28 Oct 2024 18:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 18:44:45 +0000   Mon, 28 Oct 2024 18:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 18:44:45 +0000   Mon, 28 Oct 2024 18:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 18:44:45 +0000   Mon, 28 Oct 2024 18:34:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.62
	  Hostname:    embed-certs-021370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e43992f2590c4869aa99fe323aa72fba
	  System UUID:                e43992f2-590c-4869-aa99-fe323aa72fba
	  Boot ID:                    e1a99776-ff86-4bdc-98df-70ca9124588c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-d5pk8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-qw5gl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-021370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-021370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-021370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nrr6g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-021370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-hpwrm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-021370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-021370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-021370 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node embed-certs-021370 event: Registered Node embed-certs-021370 in Controller
	
	
	==> dmesg <==
	[  +0.051294] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040972] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.136975] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.498753] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.648113] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.698016] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.077008] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056632] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.182596] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.148543] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.300354] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.011500] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.252435] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.071622] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.559869] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.959680] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 18:34] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.641044] systemd-fstab-generator[2622]: Ignoring "noauto" option for root device
	[  +4.527852] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.518295] systemd-fstab-generator[2942]: Ignoring "noauto" option for root device
	[  +5.484211] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +0.107812] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.540848] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [84f43ce11e6083d71fb32b14462d1bfbdac1e2d7e52b03c6e62cc3357db0838f] <==
	{"level":"info","ts":"2024-10-28T18:34:17.590261Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:34:17.598257Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.62:2379"}
	{"level":"info","ts":"2024-10-28T18:34:17.598828Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:34:17.598901Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:34:17.599342Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T18:34:17.600041Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T18:34:17.600345Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T18:44:17.742622Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-10-28T18:44:17.750965Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":689,"took":"7.526665ms","hash":791884138,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-10-28T18:44:17.751060Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":791884138,"revision":689,"compact-revision":-1}
	{"level":"warn","ts":"2024-10-28T18:48:42.092083Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16111507632921624053,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-10-28T18:48:42.159014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.75225ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:48:42.159216Z","caller":"traceutil/trace.go:171","msg":"trace[1896516715] linearizableReadLoop","detail":"{readStateIndex:1331; appliedIndex:1330; }","duration":"567.645033ms","start":"2024-10-28T18:48:41.591439Z","end":"2024-10-28T18:48:42.159084Z","steps":["trace[1896516715] 'read index received'  (duration: 567.359604ms)","trace[1896516715] 'applied index is now lower than readState.Index'  (duration: 284.444µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T18:48:42.159790Z","caller":"traceutil/trace.go:171","msg":"trace[850347912] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1146; }","duration":"197.93531ms","start":"2024-10-28T18:48:41.961171Z","end":"2024-10-28T18:48:42.159106Z","steps":["trace[850347912] 'range keys from in-memory index tree'  (duration: 197.740918ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:48:42.159864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"568.424804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2024-10-28T18:48:42.159924Z","caller":"traceutil/trace.go:171","msg":"trace[1440621684] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1147; }","duration":"568.488699ms","start":"2024-10-28T18:48:41.591414Z","end":"2024-10-28T18:48:42.159902Z","steps":["trace[1440621684] 'agreement among raft nodes before linearized reading'  (duration: 568.27503ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T18:48:42.159884Z","caller":"traceutil/trace.go:171","msg":"trace[1825772118] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"710.148981ms","start":"2024-10-28T18:48:41.448973Z","end":"2024-10-28T18:48:42.159122Z","steps":["trace[1825772118] 'process raft request'  (duration: 709.864257ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:48:42.159957Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:48:41.591366Z","time spent":"568.579938ms","remote":"127.0.0.1:53716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1141,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-10-28T18:48:42.161101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:48:41.448947Z","time spent":"711.034712ms","remote":"127.0.0.1:53594","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.62\" mod_revision:1139 > success:<request_put:<key:\"/registry/masterleases/192.168.50.62\" value_size:66 lease:6888135596066848246 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.62\" > >"}
	{"level":"warn","ts":"2024-10-28T18:48:42.161402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.170936ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T18:48:42.161618Z","caller":"traceutil/trace.go:171","msg":"trace[510355024] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"373.410412ms","start":"2024-10-28T18:48:41.788200Z","end":"2024-10-28T18:48:42.161611Z","steps":["trace[510355024] 'agreement among raft nodes before linearized reading'  (duration: 373.148346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T18:48:42.161648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T18:48:41.788161Z","time spent":"373.479315ms","remote":"127.0.0.1:53726","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-28T18:49:17.749960Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
	{"level":"info","ts":"2024-10-28T18:49:17.753838Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":932,"took":"3.549841ms","hash":2102189546,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-28T18:49:17.753900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2102189546,"revision":932,"compact-revision":689}
	
	
	==> kernel <==
	 18:49:23 up 20 min,  0 users,  load average: 0.01, 0.05, 0.08
	Linux embed-certs-021370 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d269f62b266bb332fdb88de8d0e9ea6e8df3ec1dfabfbb291f109ccbeaa01c49] <==
	I1028 18:45:20.690389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:45:20.690470       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:47:20.691243       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:47:20.691404       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 18:47:20.691676       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:47:20.691730       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 18:47:20.692595       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:47:20.693666       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 18:49:19.693951       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:49:19.694397       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 18:49:20.696018       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 18:49:20.696072       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 18:49:20.696379       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1028 18:49:20.696457       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 18:49:20.698444       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 18:49:20.698524       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f7431ff21844903bbdaf09ea45144e1181b9d9a28323c3423880d59ab1102c46] <==
	W1028 18:34:09.248068       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.249537       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.263327       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.338448       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.343944       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.355980       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.405406       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.459860       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.464309       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.479794       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.494416       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.531116       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.736328       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.759522       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.788096       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:09.837471       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.137101       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.221676       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.231215       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.247770       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.307447       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.320449       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.326083       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:10.518661       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 18:34:12.441848       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3f031e8707fea6a92affc0a5808a2690bf1480f38449f9046cf8a04783b941da] <==
	E1028 18:43:56.738275       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:43:57.178399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:44:26.744732       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:44:27.186732       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:44:45.162602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-021370"
	E1028 18:44:56.751043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:44:57.194514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:45:26.757718       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:45:27.204318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 18:45:32.221006       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="223.059µs"
	I1028 18:45:46.216526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="87.445µs"
	E1028 18:45:56.764728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:45:57.212853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:46:26.771719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:46:27.221073       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:46:56.779918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:46:57.229532       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:47:26.786798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:47:27.236951       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:47:56.794050       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:47:57.245214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:48:26.799948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:48:27.253701       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 18:48:56.807916       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 18:48:57.265202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ee22f5ea764491f941ec22e5e80e4134a3785ec84715304075fe4a9a06edd2bd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 18:34:29.300499       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 18:34:29.380768       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.62"]
	E1028 18:34:29.419670       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 18:34:29.525141       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 18:34:29.525218       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 18:34:29.525329       1 server_linux.go:169] "Using iptables Proxier"
	I1028 18:34:29.529934       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 18:34:29.530152       1 server.go:483] "Version info" version="v1.31.2"
	I1028 18:34:29.530316       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 18:34:29.533020       1 config.go:199] "Starting service config controller"
	I1028 18:34:29.536722       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 18:34:29.534916       1 config.go:105] "Starting endpoint slice config controller"
	I1028 18:34:29.536795       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 18:34:29.535461       1 config.go:328] "Starting node config controller"
	I1028 18:34:29.536829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 18:34:29.637253       1 shared_informer.go:320] Caches are synced for node config
	I1028 18:34:29.637339       1 shared_informer.go:320] Caches are synced for service config
	I1028 18:34:29.637384       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [923e774fae7993be1bc1da1623ea17c6f25eb42ad617cc22ffc917b89273ea41] <==
	E1028 18:34:19.708190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:19.707376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 18:34:19.708247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:19.706864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:34:19.708264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1028 18:34:19.708093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.546747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 18:34:20.546782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.550278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 18:34:20.550304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.566598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 18:34:20.566683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.642152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 18:34:20.642390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.643497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 18:34:20.643548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.732873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 18:34:20.733140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.733942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 18:34:20.734135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:20.786549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 18:34:20.786640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 18:34:21.063446       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 18:34:21.063496       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1028 18:34:23.196700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 18:48:22 embed-certs-021370 kubelet[2949]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 18:48:22 embed-certs-021370 kubelet[2949]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 18:48:22 embed-certs-021370 kubelet[2949]: E1028 18:48:22.460609    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141302460245236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:22 embed-certs-021370 kubelet[2949]: E1028 18:48:22.460632    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141302460245236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:32 embed-certs-021370 kubelet[2949]: E1028 18:48:32.201717    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:48:32 embed-certs-021370 kubelet[2949]: E1028 18:48:32.462714    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141312462245985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:32 embed-certs-021370 kubelet[2949]: E1028 18:48:32.462753    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141312462245985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:42 embed-certs-021370 kubelet[2949]: E1028 18:48:42.465275    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141322464744632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:42 embed-certs-021370 kubelet[2949]: E1028 18:48:42.465386    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141322464744632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:45 embed-certs-021370 kubelet[2949]: E1028 18:48:45.200906    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:48:52 embed-certs-021370 kubelet[2949]: E1028 18:48:52.467664    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141332467006770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:52 embed-certs-021370 kubelet[2949]: E1028 18:48:52.468007    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141332467006770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:48:58 embed-certs-021370 kubelet[2949]: E1028 18:48:58.201762    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:49:02 embed-certs-021370 kubelet[2949]: E1028 18:49:02.470190    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141342469854485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:02 embed-certs-021370 kubelet[2949]: E1028 18:49:02.470241    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141342469854485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:12 embed-certs-021370 kubelet[2949]: E1028 18:49:12.202600    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hpwrm" podUID="224f97d8-b44f-4392-a46b-c134004c061a"
	Oct 28 18:49:12 embed-certs-021370 kubelet[2949]: E1028 18:49:12.471925    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141352471509606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:12 embed-certs-021370 kubelet[2949]: E1028 18:49:12.471953    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141352471509606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:22 embed-certs-021370 kubelet[2949]: E1028 18:49:22.223148    2949 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 18:49:22 embed-certs-021370 kubelet[2949]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 18:49:22 embed-certs-021370 kubelet[2949]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 18:49:22 embed-certs-021370 kubelet[2949]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 18:49:22 embed-certs-021370 kubelet[2949]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 18:49:22 embed-certs-021370 kubelet[2949]: E1028 18:49:22.474094    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141362473512839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 18:49:22 embed-certs-021370 kubelet[2949]: E1028 18:49:22.474128    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141362473512839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [006aada4f30c5ace8f2a706506bc76e24a2ed46ae4d0484ceb2de2ae9e376c92] <==
	I1028 18:34:29.298734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 18:34:29.345011       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 18:34:29.345100       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 18:34:29.449061       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 18:34:29.452467       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-021370_58a5ed82-70b3-4caf-82ff-0532950f2f11!
	I1028 18:34:29.467770       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f36b46b-aaf8-4653-8eec-b712cce1fd67", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-021370_58a5ed82-70b3-4caf-82ff-0532950f2f11 became leader
	I1028 18:34:29.553845       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-021370_58a5ed82-70b3-4caf-82ff-0532950f2f11!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021370 -n embed-certs-021370
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-021370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hpwrm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-021370 describe pod metrics-server-6867b74b74-hpwrm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-021370 describe pod metrics-server-6867b74b74-hpwrm: exit status 1 (58.012049ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hpwrm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-021370 describe pod metrics-server-6867b74b74-hpwrm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (341.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (136.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.194:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.194:8443: connect: connection refused
[the identical WARNING above was repeated on every subsequent poll of the "kubernetes-dashboard" pod list, each failing with "connection refused", until the 9m0s wait expired]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (232.372002ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-223868" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-223868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-223868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.994µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-223868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (215.756067ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-223868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-223868 logs -n 25: (1.459772251s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:18 UTC | 28 Oct 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-703793                              | running-upgrade-703793       | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-192352                           | kubernetes-upgrade-192352    | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:19 UTC |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:19 UTC | 28 Oct 24 18:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-021370            | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC | 28 Oct 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-051152             | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-559364                              | cert-expiration-559364       | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-976691 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:21 UTC |
	|         | disable-driver-mounts-976691                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:21 UTC | 28 Oct 24 18:22 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-223868        | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-692033  | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-021370                 | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-021370                                  | embed-certs-021370           | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-051152                  | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-051152                                   | no-preload-051152            | jenkins | v1.34.0 | 28 Oct 24 18:23 UTC | 28 Oct 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-223868             | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC | 28 Oct 24 18:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-223868                              | old-k8s-version-223868       | jenkins | v1.34.0 | 28 Oct 24 18:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-692033       | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-692033 | jenkins | v1.34.0 | 28 Oct 24 18:25 UTC | 28 Oct 24 18:34 UTC |
	|         | default-k8s-diff-port-692033                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 18:25:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 18:25:35.146308   67489 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:25:35.146467   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146474   67489 out.go:358] Setting ErrFile to fd 2...
	I1028 18:25:35.146480   67489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:25:35.146973   67489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:25:35.147825   67489 out.go:352] Setting JSON to false
	I1028 18:25:35.148718   67489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7678,"bootTime":1730132257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:25:35.148810   67489 start.go:139] virtualization: kvm guest
	I1028 18:25:35.150695   67489 out.go:177] * [default-k8s-diff-port-692033] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:25:35.151797   67489 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:25:35.151797   67489 notify.go:220] Checking for updates...
	I1028 18:25:35.154193   67489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:25:35.155491   67489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:25:35.156576   67489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:25:35.157619   67489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:25:35.158702   67489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:25:35.160202   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:25:35.160602   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.160658   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.175095   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I1028 18:25:35.175421   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.175848   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.175863   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.176187   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.176387   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.176667   67489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:25:35.177210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:25:35.177325   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:25:35.191270   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I1028 18:25:35.191687   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:25:35.192092   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:25:35.192114   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:25:35.192388   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:25:35.192551   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:25:35.222738   67489 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 18:25:35.223900   67489 start.go:297] selected driver: kvm2
	I1028 18:25:35.223910   67489 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.224018   67489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:25:35.224696   67489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.224770   67489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 18:25:35.238839   67489 install.go:137] /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1028 18:25:35.239228   67489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:25:35.239258   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:25:35.239310   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:25:35.239360   67489 start.go:340] cluster config:
	{Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:25:35.239480   67489 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 18:25:35.241175   67489 out.go:177] * Starting "default-k8s-diff-port-692033" primary control-plane node in "default-k8s-diff-port-692033" cluster
	I1028 18:25:37.248702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:35.242393   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:25:35.242423   67489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 18:25:35.242432   67489 cache.go:56] Caching tarball of preloaded images
	I1028 18:25:35.242504   67489 preload.go:172] Found /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 18:25:35.242517   67489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 18:25:35.242600   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:25:35.242763   67489 start.go:360] acquireMachinesLock for default-k8s-diff-port-692033: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:25:40.320712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:46.400713   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:49.472709   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:55.552712   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:25:58.624703   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:04.704707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:07.776740   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:13.856735   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:16.928744   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:23.008721   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:26.080668   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:32.160706   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:35.232663   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:41.312774   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:44.384739   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:50.464729   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:53.536702   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:26:59.616750   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:02.688719   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:08.768731   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:11.840771   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:17.920756   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:20.992753   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:27.072785   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:30.144726   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:36.224704   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:39.296825   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:45.376692   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:48.448699   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:54.528707   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:27:57.600754   66600 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.62:22: connect: no route to host
	I1028 18:28:00.605468   66801 start.go:364] duration metric: took 4m12.368996576s to acquireMachinesLock for "no-preload-051152"
	I1028 18:28:00.605517   66801 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:00.605525   66801 fix.go:54] fixHost starting: 
	I1028 18:28:00.605815   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:00.605850   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:00.621828   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1028 18:28:00.622237   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:00.622654   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:28:00.622674   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:00.622975   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:00.623150   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:00.623272   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:28:00.624880   66801 fix.go:112] recreateIfNeeded on no-preload-051152: state=Stopped err=<nil>
	I1028 18:28:00.624910   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	W1028 18:28:00.625076   66801 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:00.627065   66801 out.go:177] * Restarting existing kvm2 VM for "no-preload-051152" ...
	I1028 18:28:00.603089   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:00.603122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603425   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:28:00.603450   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:28:00.603663   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:28:00.605343   66600 machine.go:96] duration metric: took 4m37.432159141s to provisionDockerMachine
	I1028 18:28:00.605380   66600 fix.go:56] duration metric: took 4m37.452432846s for fixHost
	I1028 18:28:00.605387   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 4m37.452449736s
	W1028 18:28:00.605419   66600 start.go:714] error starting host: provision: host is not running
	W1028 18:28:00.605517   66600 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 18:28:00.605528   66600 start.go:729] Will try again in 5 seconds ...
	I1028 18:28:00.628172   66801 main.go:141] libmachine: (no-preload-051152) Calling .Start
	I1028 18:28:00.628308   66801 main.go:141] libmachine: (no-preload-051152) Ensuring networks are active...
	I1028 18:28:00.629123   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network default is active
	I1028 18:28:00.629467   66801 main.go:141] libmachine: (no-preload-051152) Ensuring network mk-no-preload-051152 is active
	I1028 18:28:00.629782   66801 main.go:141] libmachine: (no-preload-051152) Getting domain xml...
	I1028 18:28:00.630687   66801 main.go:141] libmachine: (no-preload-051152) Creating domain...
	I1028 18:28:01.819872   66801 main.go:141] libmachine: (no-preload-051152) Waiting to get IP...
	I1028 18:28:01.820792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:01.821214   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:01.821287   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:01.821204   68016 retry.go:31] will retry after 269.081621ms: waiting for machine to come up
	I1028 18:28:02.091799   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.092220   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.092242   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.092175   68016 retry.go:31] will retry after 341.926163ms: waiting for machine to come up
	I1028 18:28:02.435679   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.436035   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.436067   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.435982   68016 retry.go:31] will retry after 355.739166ms: waiting for machine to come up
	I1028 18:28:02.793549   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:02.793928   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:02.793953   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:02.793881   68016 retry.go:31] will retry after 496.396184ms: waiting for machine to come up
	I1028 18:28:05.607678   66600 start.go:360] acquireMachinesLock for embed-certs-021370: {Name:mkc11d142cf79f0e8b8cc496582ecd67471b29b5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 18:28:03.291568   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.292038   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.292068   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.291978   68016 retry.go:31] will retry after 561.311245ms: waiting for machine to come up
	I1028 18:28:03.854782   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:03.855137   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:03.855166   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:03.855088   68016 retry.go:31] will retry after 574.675969ms: waiting for machine to come up
	I1028 18:28:04.431784   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:04.432226   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:04.432250   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:04.432177   68016 retry.go:31] will retry after 1.028136295s: waiting for machine to come up
	I1028 18:28:05.461477   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:05.461839   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:05.461869   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:05.461795   68016 retry.go:31] will retry after 955.343831ms: waiting for machine to come up
	I1028 18:28:06.418161   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:06.418629   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:06.418659   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:06.418576   68016 retry.go:31] will retry after 1.615930502s: waiting for machine to come up
	I1028 18:28:08.036275   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:08.036641   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:08.036662   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:08.036615   68016 retry.go:31] will retry after 2.111463198s: waiting for machine to come up
	I1028 18:28:10.150891   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:10.151403   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:10.151429   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:10.151351   68016 retry.go:31] will retry after 2.35232289s: waiting for machine to come up
	I1028 18:28:12.506070   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:12.506471   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:12.506494   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:12.506447   68016 retry.go:31] will retry after 2.874687772s: waiting for machine to come up
	I1028 18:28:15.384360   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:15.384680   66801 main.go:141] libmachine: (no-preload-051152) DBG | unable to find current IP address of domain no-preload-051152 in network mk-no-preload-051152
	I1028 18:28:15.384712   66801 main.go:141] libmachine: (no-preload-051152) DBG | I1028 18:28:15.384636   68016 retry.go:31] will retry after 3.299950406s: waiting for machine to come up
	I1028 18:28:19.893083   67149 start.go:364] duration metric: took 3m43.747535803s to acquireMachinesLock for "old-k8s-version-223868"
	I1028 18:28:19.893161   67149 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:19.893170   67149 fix.go:54] fixHost starting: 
	I1028 18:28:19.893556   67149 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:19.893608   67149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:19.909857   67149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I1028 18:28:19.910215   67149 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:19.910669   67149 main.go:141] libmachine: Using API Version  1
	I1028 18:28:19.910690   67149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:19.911049   67149 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:19.911241   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:19.911395   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetState
	I1028 18:28:19.912825   67149 fix.go:112] recreateIfNeeded on old-k8s-version-223868: state=Stopped err=<nil>
	I1028 18:28:19.912856   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	W1028 18:28:19.912996   67149 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:19.915041   67149 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-223868" ...
	I1028 18:28:19.916422   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .Start
	I1028 18:28:19.916611   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring networks are active...
	I1028 18:28:19.917295   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network default is active
	I1028 18:28:19.917560   67149 main.go:141] libmachine: (old-k8s-version-223868) Ensuring network mk-old-k8s-version-223868 is active
	I1028 18:28:19.917951   67149 main.go:141] libmachine: (old-k8s-version-223868) Getting domain xml...
	I1028 18:28:19.918628   67149 main.go:141] libmachine: (old-k8s-version-223868) Creating domain...
	I1028 18:28:18.688243   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.688710   66801 main.go:141] libmachine: (no-preload-051152) Found IP for machine: 192.168.61.78
	I1028 18:28:18.688738   66801 main.go:141] libmachine: (no-preload-051152) Reserving static IP address...
	I1028 18:28:18.688754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has current primary IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.689151   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.689174   66801 main.go:141] libmachine: (no-preload-051152) Reserved static IP address: 192.168.61.78
	I1028 18:28:18.689188   66801 main.go:141] libmachine: (no-preload-051152) DBG | skip adding static IP to network mk-no-preload-051152 - found existing host DHCP lease matching {name: "no-preload-051152", mac: "52:54:00:00:67:79", ip: "192.168.61.78"}
	I1028 18:28:18.689198   66801 main.go:141] libmachine: (no-preload-051152) Waiting for SSH to be available...
	I1028 18:28:18.689217   66801 main.go:141] libmachine: (no-preload-051152) DBG | Getting to WaitForSSH function...
	I1028 18:28:18.691372   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691721   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.691754   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.691861   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH client type: external
	I1028 18:28:18.691890   66801 main.go:141] libmachine: (no-preload-051152) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa (-rw-------)
	I1028 18:28:18.691950   66801 main.go:141] libmachine: (no-preload-051152) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:18.691967   66801 main.go:141] libmachine: (no-preload-051152) DBG | About to run SSH command:
	I1028 18:28:18.691979   66801 main.go:141] libmachine: (no-preload-051152) DBG | exit 0
	I1028 18:28:18.816169   66801 main.go:141] libmachine: (no-preload-051152) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:18.816571   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetConfigRaw
	I1028 18:28:18.817209   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:18.819569   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.819891   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.819913   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.820164   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/config.json ...
	I1028 18:28:18.820375   66801 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:18.820392   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:18.820618   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.822580   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.822953   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.822983   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.823096   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.823250   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823390   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.823537   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.823687   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.823878   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.823890   66801 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:18.932489   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:18.932516   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.932769   66801 buildroot.go:166] provisioning hostname "no-preload-051152"
	I1028 18:28:18.932798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:18.933003   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:18.935565   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.935938   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:18.935965   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:18.936147   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:18.936346   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936513   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:18.936674   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:18.936838   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:18.936994   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:18.937006   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-051152 && echo "no-preload-051152" | sudo tee /etc/hostname
	I1028 18:28:19.057840   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-051152
	
	I1028 18:28:19.057872   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.060536   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.060917   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.060946   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.061068   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.061237   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061405   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.061544   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.061700   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.061848   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.061863   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-051152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-051152/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-051152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:19.180890   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:19.180920   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:19.180957   66801 buildroot.go:174] setting up certificates
	I1028 18:28:19.180971   66801 provision.go:84] configureAuth start
	I1028 18:28:19.180985   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetMachineName
	I1028 18:28:19.181299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.183792   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184144   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.184172   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.184309   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.186298   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186588   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.186616   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.186722   66801 provision.go:143] copyHostCerts
	I1028 18:28:19.186790   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:19.186804   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:19.186868   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:19.186974   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:19.186986   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:19.187023   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:19.187107   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:19.187115   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:19.187146   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:19.187197   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.no-preload-051152 san=[127.0.0.1 192.168.61.78 localhost minikube no-preload-051152]
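
For reference, a minimal Go sketch (not minikube's code) of how the san=[...] list logged above could be assembled from the profile's name and IP; the helper name is illustrative:

package main

import "fmt"

// serverCertSANs mirrors the san=[...] list in the provision log above:
// loopback, the machine IP, and the machine's host names.
func serverCertSANs(machineName, ip string) []string {
	return []string{"127.0.0.1", ip, "localhost", "minikube", machineName}
}

func main() {
	fmt.Println(serverCertSANs("no-preload-051152", "192.168.61.78"))
}
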
	I1028 18:28:19.275109   66801 provision.go:177] copyRemoteCerts
	I1028 18:28:19.275175   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:19.275200   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.278392   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.278946   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.278978   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.279183   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.279454   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.279651   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.279789   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.362094   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:19.384635   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:28:19.406649   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:19.428807   66801 provision.go:87] duration metric: took 247.825267ms to configureAuth
	I1028 18:28:19.428830   66801 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:19.429026   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:28:19.429090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.431615   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.431928   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.431954   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.432090   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.432278   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432434   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.432602   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.432786   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.432932   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.432946   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:19.655137   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:19.655163   66801 machine.go:96] duration metric: took 834.775161ms to provisionDockerMachine
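
For context, a small Go sketch of rendering the /etc/sysconfig/crio.minikube drop-in written a few lines above; only the option string and service CIDR come from the log, the helper itself is illustrative:

package main

import "fmt"

// crioSysconfig renders the CRIO_MINIKUBE_OPTIONS drop-in content seen above.
func crioSysconfig(insecureRegistryCIDR string) string {
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistryCIDR)
}

func main() {
	fmt.Print(crioSysconfig("10.96.0.0/12"))
}
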
	I1028 18:28:19.655175   66801 start.go:293] postStartSetup for "no-preload-051152" (driver="kvm2")
	I1028 18:28:19.655185   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:19.655199   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.655509   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:19.655532   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.658099   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658411   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.658442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.658566   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.658744   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.658884   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.659013   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.743030   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:19.746986   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:19.747007   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:19.747081   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:19.747177   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:19.747290   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:19.756378   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:19.779243   66801 start.go:296] duration metric: took 124.056855ms for postStartSetup
	I1028 18:28:19.779283   66801 fix.go:56] duration metric: took 19.173756385s for fixHost
	I1028 18:28:19.779305   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.781887   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782205   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.782226   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.782367   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.782557   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782709   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.782836   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.782999   66801 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:19.783180   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.78 22 <nil> <nil>}
	I1028 18:28:19.783191   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:19.892920   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140099.866892804
	
	I1028 18:28:19.892944   66801 fix.go:216] guest clock: 1730140099.866892804
	I1028 18:28:19.892954   66801 fix.go:229] Guest: 2024-10-28 18:28:19.866892804 +0000 UTC Remote: 2024-10-28 18:28:19.779287594 +0000 UTC m=+271.674302547 (delta=87.60521ms)
	I1028 18:28:19.892997   66801 fix.go:200] guest clock delta is within tolerance: 87.60521ms
	I1028 18:28:19.893008   66801 start.go:83] releasing machines lock for "no-preload-051152", held for 19.287505767s
	I1028 18:28:19.893034   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.893299   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:19.895775   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896177   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.896204   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.896362   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.896826   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897023   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:28:19.897133   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:19.897171   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.897267   66801 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:19.897291   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:28:19.899703   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.899995   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900031   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900054   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900208   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900374   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900416   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:19.900442   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:19.900550   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.900626   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:28:19.900707   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.900818   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:28:19.900944   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:28:19.901098   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:28:19.982201   66801 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:20.008913   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:20.157816   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:20.165773   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:20.165837   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:20.187342   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:20.187359   66801 start.go:495] detecting cgroup driver to use...
	I1028 18:28:20.187423   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:20.204825   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:20.220702   66801 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:20.220776   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:20.238812   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:20.253664   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:20.363567   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:20.534475   66801 docker.go:233] disabling docker service ...
	I1028 18:28:20.534564   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:20.548424   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:20.564292   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:20.687135   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:20.796225   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:20.810327   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:20.828804   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:28:20.828866   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.838719   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:20.838768   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.849166   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.862811   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.875223   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:20.885402   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.895602   66801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:20.914163   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
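
The sed runs above rewrite keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, sysctls). A rough Go equivalent of the key rewrites, with sample file content standing in for the real file on the guest:

package main

import (
	"fmt"
	"regexp"
)

// setKey rewrites a `key = value` line in a crio.conf drop-in, roughly what the
// sed invocations above do on the guest VM.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n" // illustrative content
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
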
	I1028 18:28:20.924194   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:20.934907   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:20.934958   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
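
The pattern above is verify-then-load: the sysctl read fails because br_netfilter is not loaded, so the module is loaded before continuing. A minimal sketch of that fallback; command names come from the log, the error handling is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter tries the bridge-netfilter sysctl first and falls back
// to loading the br_netfilter kernel module when the sysctl cannot be read.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // already available
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("load br_netfilter: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}
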
	I1028 18:28:20.948898   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:20.958955   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:21.069438   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:21.175294   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:21.175379   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:21.179886   66801 start.go:563] Will wait 60s for crictl version
	I1028 18:28:21.179942   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.184195   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:21.226939   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:21.227043   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.254702   66801 ssh_runner.go:195] Run: crio --version
	I1028 18:28:21.284607   66801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:28:21.285906   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetIP
	I1028 18:28:21.288560   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.288918   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:28:21.288945   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:28:21.289132   66801 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:21.293108   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
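
The one-liner above removes any stale host.minikube.internal entry from /etc/hosts and appends a fresh one. A small Go sketch of the same idempotent upsert, for illustration only:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line ending in the host name and appends
// a fresh "IP<tab>name" entry, the same effect as the bash one-liner above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry, like `grep -v` in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.168.61.1", "host.minikube.internal"))
}
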
	I1028 18:28:21.307303   66801 kubeadm.go:883] updating cluster {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:21.307447   66801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:28:21.307495   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:21.347493   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:28:21.347520   66801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:21.347595   66801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.347609   66801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.347621   66801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.347656   66801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.347690   66801 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 18:28:21.347691   66801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.347758   66801 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.347695   66801 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349312   66801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.349387   66801 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 18:28:21.349402   66801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.349526   66801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:21.349574   66801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.349582   66801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.349632   66801 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.349311   66801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
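
The daemon lookups above all miss, so the images are loaded from minikube's on-disk cache and transferred to the VM, as the following lines show. A hedged sketch of mapping an image ref to a cached tarball name; the exact cache directory layout here is an assumption, not minikube's:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath maps an image ref such as "registry.k8s.io/kube-proxy:v1.31.2"
// to a cached file name like "kube-proxy_v1.31.2" under cacheDir.
func cachedImagePath(cacheDir, ref string) string {
	name := ref[strings.LastIndex(ref, "/")+1:]
	return filepath.Join(cacheDir, strings.ReplaceAll(name, ":", "_"))
}

func main() {
	p := cachedImagePath("/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io", "registry.k8s.io/kube-proxy:v1.31.2")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("not in local cache, would pull:", p)
	} else {
		fmt.Println("loading from cache:", p)
	}
}
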
	I1028 18:28:21.515246   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.515760   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.543817   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 18:28:21.551755   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.562433   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.594208   66801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 18:28:21.594257   66801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.594291   66801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 18:28:21.594317   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.594323   66801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.594364   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.666046   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.666654   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.757831   66801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 18:28:21.757867   66801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.757867   66801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 18:28:21.757894   66801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.757914   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757926   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.757937   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.757982   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.758142   66801 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 18:28:21.758161   66801 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 18:28:21.758197   66801 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.758169   66801 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.758234   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.758270   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:21.813746   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:21.813792   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.813836   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.813837   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.813840   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.813890   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.934434   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 18:28:21.958229   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:21.958287   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 18:28:21.958377   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:21.958381   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:21.958467   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.053179   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 18:28:22.053304   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.053351   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 18:28:22.053447   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:22.087756   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 18:28:22.087762   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 18:28:22.087826   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 18:28:22.087867   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 18:28:22.087897   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 18:28:22.087907   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087938   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 18:28:22.087942   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 18:28:22.161136   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 18:28:22.161259   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:22.201924   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 18:28:22.201967   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 18:28:22.202032   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:22.202068   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:21.207941   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting to get IP...
	I1028 18:28:21.209066   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.209518   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.209604   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.209495   68155 retry.go:31] will retry after 258.02952ms: waiting for machine to come up
	I1028 18:28:21.468599   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.469034   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.469052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.468996   68155 retry.go:31] will retry after 389.053264ms: waiting for machine to come up
	I1028 18:28:21.859493   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:21.859987   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:21.860017   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:21.859929   68155 retry.go:31] will retry after 454.438888ms: waiting for machine to come up
	I1028 18:28:22.315484   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.315961   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.315988   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.315904   68155 retry.go:31] will retry after 531.549561ms: waiting for machine to come up
	I1028 18:28:22.849247   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:22.849736   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:22.849791   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:22.849693   68155 retry.go:31] will retry after 602.202649ms: waiting for machine to come up
	I1028 18:28:23.453311   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:23.453859   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:23.453887   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:23.453796   68155 retry.go:31] will retry after 836.622626ms: waiting for machine to come up
	I1028 18:28:24.291959   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:24.292286   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:24.292315   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:24.292252   68155 retry.go:31] will retry after 1.187276744s: waiting for machine to come up
	I1028 18:28:25.480962   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:25.481398   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:25.481417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:25.481350   68155 retry.go:31] will retry after 1.417127806s: waiting for machine to come up
	I1028 18:28:23.586400   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.127903   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3: (2.040063682s)
	I1028 18:28:24.127962   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 18:28:24.127967   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (1.966690859s)
	I1028 18:28:24.127991   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 18:28:24.128010   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.925953727s)
	I1028 18:28:24.128034   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.925947261s)
	I1028 18:28:24.128041   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 18:28:24.128048   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 18:28:24.127904   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.03994028s)
	I1028 18:28:24.128069   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:24.128085   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 18:28:24.128109   66801 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 18:28:24.128123   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.128138   66801 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:24.128166   66801 ssh_runner.go:195] Run: which crictl
	I1028 18:28:24.128180   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 18:28:24.132734   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 18:28:26.097200   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.9689964s)
	I1028 18:28:26.097240   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 18:28:26.097241   66801 ssh_runner.go:235] Completed: which crictl: (1.969052863s)
	I1028 18:28:26.097264   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.097308   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:26.097309   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 18:28:26.900944   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:26.901481   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:26.901511   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:26.901426   68155 retry.go:31] will retry after 1.766762252s: waiting for machine to come up
	I1028 18:28:28.670334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:28.670798   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:28.670827   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:28.670742   68155 retry.go:31] will retry after 2.287152926s: waiting for machine to come up
	I1028 18:28:30.959639   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:30.959947   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:30.959963   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:30.959917   68155 retry.go:31] will retry after 1.799223833s: waiting for machine to come up
	I1028 18:28:28.165293   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.067952153s)
	I1028 18:28:28.165410   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:28.165497   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.068111312s)
	I1028 18:28:28.165523   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 18:28:28.165548   66801 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.165591   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 18:28:28.208189   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:30.152411   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.986796263s)
	I1028 18:28:30.152458   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 18:28:30.152496   66801 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152504   66801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.944281988s)
	I1028 18:28:30.152550   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 18:28:30.152556   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 18:28:30.152652   66801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:32.761498   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:32.761941   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:32.761968   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:32.761894   68155 retry.go:31] will retry after 2.231065891s: waiting for machine to come up
	I1028 18:28:34.994438   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:34.994902   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | unable to find current IP address of domain old-k8s-version-223868 in network mk-old-k8s-version-223868
	I1028 18:28:34.994936   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | I1028 18:28:34.994847   68155 retry.go:31] will retry after 4.079794439s: waiting for machine to come up
	I1028 18:28:33.842059   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.689484833s)
	I1028 18:28:33.842109   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 18:28:33.842138   66801 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:33.842155   66801 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.68947822s)
	I1028 18:28:33.842184   66801 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 18:28:33.842206   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 18:28:35.714458   66801 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.872222439s)
	I1028 18:28:35.714493   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 18:28:35.714521   66801 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:35.714567   66801 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 18:28:36.568124   66801 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 18:28:36.568177   66801 cache_images.go:123] Successfully loaded all cached images
	I1028 18:28:36.568185   66801 cache_images.go:92] duration metric: took 15.220649269s to LoadCachedImages
	I1028 18:28:36.568199   66801 kubeadm.go:934] updating node { 192.168.61.78 8443 v1.31.2 crio true true} ...
	I1028 18:28:36.568310   66801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-051152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
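
A minimal sketch of reassembling the kubelet ExecStart line shown in the drop-in above from the profile's Kubernetes version, node name, and node IP; the helper is illustrative, not minikube's code:

package main

import "fmt"

// kubeletExecStart renders the ExecStart flags seen in the systemd drop-in above.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s", version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.2", "no-preload-051152", "192.168.61.78"))
}
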
	I1028 18:28:36.568383   66801 ssh_runner.go:195] Run: crio config
	I1028 18:28:36.613400   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:36.613425   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:36.613435   66801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:36.613454   66801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.78 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-051152 NodeName:no-preload-051152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:28:36.613596   66801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-051152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:36.613669   66801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:28:36.624493   66801 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:36.624553   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:36.633828   66801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 18:28:36.649661   66801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:36.665454   66801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1028 18:28:36.681280   66801 ssh_runner.go:195] Run: grep 192.168.61.78	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:36.685010   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:36.697177   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:36.823266   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:36.840346   66801 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152 for IP: 192.168.61.78
	I1028 18:28:36.840366   66801 certs.go:194] generating shared ca certs ...
	I1028 18:28:36.840380   66801 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:36.840538   66801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:36.840578   66801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:36.840586   66801 certs.go:256] generating profile certs ...
	I1028 18:28:36.840661   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.key
	I1028 18:28:36.840722   66801 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key.262d982c
	I1028 18:28:36.840758   66801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key
	I1028 18:28:36.840859   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:36.840892   66801 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:36.840902   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:36.840922   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:36.840943   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:36.840971   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:36.841025   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:36.841818   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:36.881548   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:36.907084   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:36.947810   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:36.976268   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 18:28:37.003795   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 18:28:37.036252   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:37.059731   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:28:37.083467   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:37.106397   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:37.128719   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:37.151133   66801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:37.166917   66801 ssh_runner.go:195] Run: openssl version
	I1028 18:28:37.172387   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:37.182117   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186329   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.186389   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:37.191925   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:37.201799   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:37.211620   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215889   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.215923   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:37.221588   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:37.231983   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:37.242291   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246869   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.246904   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:37.252408   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
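The ln -fs commands above build an OpenSSL-style CA directory: `openssl x509 -hash -noout` prints the subject-name hash of the certificate, and the certificate is then symlinked as <hash>.0 under /etc/ssl/certs so TLS clients can locate it by hash. Reproducing one link by hand (a sketch, using the minikubeCA cert from this log, whose hash is b5213941):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"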
	I1028 18:28:37.262946   66801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:37.267334   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:37.273164   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:37.278831   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:37.284778   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:37.290547   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:37.296195   66801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
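Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably feeding into minikube's decision about whether to regenerate the control-plane certs. A standalone equivalent, using one of the paths from the log:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "cert valid for at least 24h"
	else
	  echo "cert missing or expiring within 24h"
	fi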
	I1028 18:28:37.301915   66801 kubeadm.go:392] StartCluster: {Name:no-preload-051152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-051152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:37.301986   66801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:37.302037   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.345115   66801 cri.go:89] found id: ""
	I1028 18:28:37.345185   66801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:37.355312   66801 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:37.355328   66801 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:37.355370   66801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:37.364777   66801 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:37.366056   66801 kubeconfig.go:125] found "no-preload-051152" server: "https://192.168.61.78:8443"
	I1028 18:28:37.368829   66801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:37.378010   66801 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.78
	I1028 18:28:37.378039   66801 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:37.378047   66801 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:37.378083   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:37.413442   66801 cri.go:89] found id: ""
	I1028 18:28:37.413522   66801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:37.428998   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:37.438365   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:37.438391   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:37.438442   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:37.447260   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:37.447310   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:37.456615   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:37.465292   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:37.465351   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:37.474382   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.482957   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:37.483012   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:37.491991   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:37.500635   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:37.500709   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:28:37.509632   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:37.518808   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:37.642796   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:40.421350   67489 start.go:364] duration metric: took 3m5.178550845s to acquireMachinesLock for "default-k8s-diff-port-692033"
	I1028 18:28:40.421416   67489 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:28:40.421430   67489 fix.go:54] fixHost starting: 
	I1028 18:28:40.421843   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:28:40.421894   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:28:40.439583   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I1028 18:28:40.440133   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:28:40.440679   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:28:40.440701   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:28:40.441025   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:28:40.441198   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:40.441359   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:28:40.443029   67489 fix.go:112] recreateIfNeeded on default-k8s-diff-port-692033: state=Stopped err=<nil>
	I1028 18:28:40.443055   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	W1028 18:28:40.443202   67489 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:28:40.445489   67489 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-692033" ...
	I1028 18:28:39.079052   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079556   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has current primary IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.079584   67149 main.go:141] libmachine: (old-k8s-version-223868) Found IP for machine: 192.168.83.194
	I1028 18:28:39.079593   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserving static IP address...
	I1028 18:28:39.079888   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.079919   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | skip adding static IP to network mk-old-k8s-version-223868 - found existing host DHCP lease matching {name: "old-k8s-version-223868", mac: "52:54:00:9d:b8:c9", ip: "192.168.83.194"}
	I1028 18:28:39.079935   67149 main.go:141] libmachine: (old-k8s-version-223868) Reserved static IP address: 192.168.83.194
	I1028 18:28:39.079955   67149 main.go:141] libmachine: (old-k8s-version-223868) Waiting for SSH to be available...
	I1028 18:28:39.079971   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Getting to WaitForSSH function...
	I1028 18:28:39.082041   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082334   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.082354   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.082480   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH client type: external
	I1028 18:28:39.082500   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa (-rw-------)
	I1028 18:28:39.082528   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:39.082555   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | About to run SSH command:
	I1028 18:28:39.082567   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | exit 0
	I1028 18:28:39.204523   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:39.204883   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetConfigRaw
	I1028 18:28:39.205526   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.208073   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208434   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.208478   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.208709   67149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/config.json ...
	I1028 18:28:39.208907   67149 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:39.208926   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:39.209144   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.211109   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211407   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.211437   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.211574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.211739   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.211888   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.212033   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.212218   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.212388   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.212398   67149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:39.316528   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:39.316566   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.316813   67149 buildroot.go:166] provisioning hostname "old-k8s-version-223868"
	I1028 18:28:39.316841   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.317028   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.319389   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319687   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.319713   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.319836   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.320017   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320167   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.320310   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.320458   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.320642   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.320656   67149 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-223868 && echo "old-k8s-version-223868" | sudo tee /etc/hostname
	I1028 18:28:39.439149   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-223868
	
	I1028 18:28:39.439179   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.441957   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442268   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.442300   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.442528   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.442736   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.442940   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.443122   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.443304   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.443525   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.443550   67149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-223868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-223868/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-223868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:39.561619   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:39.561651   67149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:39.561702   67149 buildroot.go:174] setting up certificates
	I1028 18:28:39.561716   67149 provision.go:84] configureAuth start
	I1028 18:28:39.561731   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetMachineName
	I1028 18:28:39.562015   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:39.564838   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565195   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.565229   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.565373   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.567875   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568262   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.568287   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.568452   67149 provision.go:143] copyHostCerts
	I1028 18:28:39.568534   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:39.568553   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:39.568621   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:39.568745   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:39.568768   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:39.568810   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:39.568899   67149 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:39.568911   67149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:39.568937   67149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:39.569006   67149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-223868 san=[127.0.0.1 192.168.83.194 localhost minikube old-k8s-version-223868]
	I1028 18:28:39.786398   67149 provision.go:177] copyRemoteCerts
	I1028 18:28:39.786449   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:39.786482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.789048   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789373   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.789417   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.789535   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.789733   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.789884   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.790013   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:39.871816   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:28:39.902889   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 18:28:39.932633   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:28:39.958581   67149 provision.go:87] duration metric: took 396.851161ms to configureAuth
	I1028 18:28:39.958609   67149 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:28:39.958796   67149 config.go:182] Loaded profile config "old-k8s-version-223868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:28:39.958881   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:39.961667   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962019   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:39.962044   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:39.962240   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:39.962468   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962671   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:39.962850   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:39.963037   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:39.963220   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:39.963239   67149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:28:40.179808   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:28:40.179843   67149 machine.go:96] duration metric: took 970.91659ms to provisionDockerMachine
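The provisioning step above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarts CRI-O; 10.96.0.0/12 is the cluster's service CIDR, presumably so that pulls from in-cluster registry services are allowed without TLS. Verifying the drop-in on the node (hypothetical check):

	cat /etc/sysconfig/crio.minikube
	systemctl status crio --no-pager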
	I1028 18:28:40.179857   67149 start.go:293] postStartSetup for "old-k8s-version-223868" (driver="kvm2")
	I1028 18:28:40.179869   67149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:28:40.179917   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.180287   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:28:40.180319   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.183011   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183383   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.183411   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.183578   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.183770   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.183964   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.184114   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.270445   67149 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:28:40.275798   67149 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:28:40.275825   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:28:40.275898   67149 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:28:40.275995   67149 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:28:40.276108   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:28:40.287529   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:40.310860   67149 start.go:296] duration metric: took 130.989944ms for postStartSetup
	I1028 18:28:40.310899   67149 fix.go:56] duration metric: took 20.417730265s for fixHost
	I1028 18:28:40.310925   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.313613   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.313967   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.314000   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.314175   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.314354   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314518   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.314692   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.314862   67149 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:40.315021   67149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.83.194 22 <nil> <nil>}
	I1028 18:28:40.315032   67149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:28:40.421204   67149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140120.384024791
	
	I1028 18:28:40.421225   67149 fix.go:216] guest clock: 1730140120.384024791
	I1028 18:28:40.421235   67149 fix.go:229] Guest: 2024-10-28 18:28:40.384024791 +0000 UTC Remote: 2024-10-28 18:28:40.310903937 +0000 UTC m=+244.300202669 (delta=73.120854ms)
	I1028 18:28:40.421259   67149 fix.go:200] guest clock delta is within tolerance: 73.120854ms
	I1028 18:28:40.421265   67149 start.go:83] releasing machines lock for "old-k8s-version-223868", held for 20.528130845s
	I1028 18:28:40.421297   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.421574   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:40.424700   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425088   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.425116   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.425286   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.425971   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426188   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .DriverName
	I1028 18:28:40.426266   67149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:28:40.426340   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.426604   67149 ssh_runner.go:195] Run: cat /version.json
	I1028 18:28:40.426632   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHHostname
	I1028 18:28:40.429408   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429569   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429807   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.429841   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.429950   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430059   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:40.430092   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:40.430123   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430236   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHPort
	I1028 18:28:40.430383   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHKeyPath
	I1028 18:28:40.430459   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430482   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetSSHUsername
	I1028 18:28:40.430616   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.430614   67149 sshutil.go:53] new ssh client: &{IP:192.168.83.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/old-k8s-version-223868/id_rsa Username:docker}
	I1028 18:28:40.509203   67149 ssh_runner.go:195] Run: systemctl --version
	I1028 18:28:40.540019   67149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:28:40.701732   67149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:28:40.710264   67149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:28:40.710354   67149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:28:40.731373   67149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:28:40.731398   67149 start.go:495] detecting cgroup driver to use...
	I1028 18:28:40.731465   67149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:28:40.751312   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:28:40.766288   67149 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:28:40.766399   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:28:40.783995   67149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:28:40.800295   67149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:28:40.940688   67149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:28:41.101493   67149 docker.go:233] disabling docker service ...
	I1028 18:28:41.101562   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:28:41.123350   67149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:28:41.141744   67149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:28:41.279020   67149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:28:41.414748   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:28:41.429469   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:28:41.448611   67149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 18:28:41.448669   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.460766   67149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:28:41.460842   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.473021   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:28:41.485888   67149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
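After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain lines equivalent to the following (reconstructed from the commands, not copied from the node):

	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"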
	I1028 18:28:41.497498   67149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:28:41.509250   67149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:28:41.519701   67149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:28:41.519754   67149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:28:41.534596   67149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:28:41.544814   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:41.681203   67149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:28:41.786879   67149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:28:41.786957   67149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:28:41.791981   67149 start.go:563] Will wait 60s for crictl version
	I1028 18:28:41.792041   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:41.796034   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:28:41.839867   67149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:28:41.839958   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.873029   67149 ssh_runner.go:195] Run: crio --version
	I1028 18:28:41.904534   67149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 18:28:38.508232   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.720400   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:38.784720   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
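Together with the earlier certs/kubeconfig steps for process 66801, the restart path re-runs individual kubeadm init phases against the staged config rather than a full `kubeadm init`. The sequence, each wrapped in the same `sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH"` prefix shown in the log, is:

	kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml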
	I1028 18:28:38.892007   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:38.892083   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.392953   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.892228   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:39.912702   66801 api_server.go:72] duration metric: took 1.020696043s to wait for apiserver process to appear ...
	I1028 18:28:39.912728   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:28:39.912749   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:39.913221   66801 api_server.go:269] stopped: https://192.168.61.78:8443/healthz: Get "https://192.168.61.78:8443/healthz": dial tcp 192.168.61.78:8443: connect: connection refused
	I1028 18:28:40.413025   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:40.446984   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Start
	I1028 18:28:40.447191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring networks are active...
	I1028 18:28:40.447998   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network default is active
	I1028 18:28:40.448350   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Ensuring network mk-default-k8s-diff-port-692033 is active
	I1028 18:28:40.448884   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Getting domain xml...
	I1028 18:28:40.449664   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Creating domain...
	I1028 18:28:41.740010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting to get IP...
	I1028 18:28:41.740827   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:41.741273   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:41.741192   68341 retry.go:31] will retry after 276.06097ms: waiting for machine to come up
	I1028 18:28:42.018700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019135   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.019159   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.019089   68341 retry.go:31] will retry after 318.252876ms: waiting for machine to come up
	I1028 18:28:42.338630   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339287   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.339312   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.339205   68341 retry.go:31] will retry after 428.196122ms: waiting for machine to come up
	I1028 18:28:42.768656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769225   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:42.769248   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:42.769134   68341 retry.go:31] will retry after 483.256928ms: waiting for machine to come up
	I1028 18:28:43.253739   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254304   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.254353   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.254220   68341 retry.go:31] will retry after 577.932805ms: waiting for machine to come up
	I1028 18:28:43.834355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.834976   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:43.835021   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:43.834945   68341 retry.go:31] will retry after 639.531065ms: waiting for machine to come up
	I1028 18:28:44.475727   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476299   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:44.476331   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:44.476248   68341 retry.go:31] will retry after 1.171398436s: waiting for machine to come up
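
The repeated retry.go:31 messages above are libmachine polling libvirt for the guest's DHCP lease, sleeping a little longer after each miss (276ms, 318ms, ... up to a couple of seconds). The following is a minimal Go sketch of that wait loop, not minikube's actual helper; lookupIP is a hypothetical stand-in for the lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's
// current address; it is a hypothetical helper, not part of minikube.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a growing, jittered delay, roughly matching the
// gaps printed by retry.go in the log above.
func waitForIP(domain string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff += backoff / 2
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-692033", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
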
	I1028 18:28:43.473059   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.473096   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.473113   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.588338   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:28:43.588371   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:28:43.913612   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:43.918557   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:43.918598   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.412902   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.425930   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.425971   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:44.913482   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:44.926092   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:28:44.926126   66801 api_server.go:103] status: https://192.168.61.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:28:45.413673   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:28:45.419384   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:28:45.430384   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:28:45.430431   66801 api_server.go:131] duration metric: took 5.517694037s to wait for apiserver health ...
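
The healthz progression above (connection refused, then 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still failing, then 200 "ok") is the apiserver finishing startup. An illustrative readiness poll of the same endpoint is sketched below; this is not minikube's api_server.go, only the URL is taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz keeps GETting /healthz until it answers 200, tolerating the
// interim 403 (anonymous user) and 500 (poststarthooks not done) responses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's cert is not trusted by the host running the probe,
		// so certificate verification is skipped for this readiness check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.78:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
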
	I1028 18:28:45.430442   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:28:45.430450   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:45.432587   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:28:41.906005   67149 main.go:141] libmachine: (old-k8s-version-223868) Calling .GetIP
	I1028 18:28:41.909278   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909683   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:b8:c9", ip: ""} in network mk-old-k8s-version-223868: {Iface:virbr4 ExpiryTime:2024-10-28 19:28:31 +0000 UTC Type:0 Mac:52:54:00:9d:b8:c9 Iaid: IPaddr:192.168.83.194 Prefix:24 Hostname:old-k8s-version-223868 Clientid:01:52:54:00:9d:b8:c9}
	I1028 18:28:41.909741   67149 main.go:141] libmachine: (old-k8s-version-223868) DBG | domain old-k8s-version-223868 has defined IP address 192.168.83.194 and MAC address 52:54:00:9d:b8:c9 in network mk-old-k8s-version-223868
	I1028 18:28:41.909996   67149 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1028 18:28:41.915405   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:41.931747   67149 kubeadm.go:883] updating cluster {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:28:41.931886   67149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 18:28:41.931944   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:41.987909   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:41.987966   67149 ssh_runner.go:195] Run: which lz4
	I1028 18:28:41.993527   67149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:28:41.998982   67149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:28:41.999014   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 18:28:43.643480   67149 crio.go:462] duration metric: took 1.649982959s to copy over tarball
	I1028 18:28:43.643559   67149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
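
The lines above show the runner discovering that no v1.20.0 images are preloaded in CRI-O, checking for /preloaded.tar.lz4 on the guest, copying over the ~473 MB cached tarball, and unpacking it into /var with lz4. A simplified sketch of the extraction step follows; it is illustrative only (not minikube's ssh_runner), the paths are taken from the log, and the copy step is assumed to have already happened.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, preserving
// the security.capability xattrs, mirroring the tar invocation in the log.
func extractPreload() error {
	if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload(); err != nil {
		fmt.Println(err)
	}
}
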
	I1028 18:28:45.433946   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:28:45.453114   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:28:45.479255   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:28:45.497020   66801 system_pods.go:59] 8 kube-system pods found
	I1028 18:28:45.497072   66801 system_pods.go:61] "coredns-7c65d6cfc9-74b6t" [b6a550da-7c40-4283-b49e-1ab29e652037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:28:45.497084   66801 system_pods.go:61] "etcd-no-preload-051152" [d5b31ded-95ce-4dde-ba88-e653dfdb8d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:28:45.497097   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [95d0acb0-4d58-4307-9f4f-10f920ff4745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:28:45.497105   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [722530e1-1d76-40dc-8a24-fe79d0167835] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:28:45.497112   66801 system_pods.go:61] "kube-proxy-kg42f" [7891354b-a501-45c4-b15c-cf6d29e3721f] Running
	I1028 18:28:45.497121   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [c658808c-79c2-4b8e-b72c-0b2d8e058ab4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:28:45.497130   66801 system_pods.go:61] "metrics-server-6867b74b74-vgd8k" [626b71a2-6904-409f-9274-6963a94e6ac2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:28:45.497137   66801 system_pods.go:61] "storage-provisioner" [39bf84c9-9c6f-4048-8a11-460fb12f622b] Running
	I1028 18:28:45.497146   66801 system_pods.go:74] duration metric: took 17.863894ms to wait for pod list to return data ...
	I1028 18:28:45.497160   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:28:45.501945   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:28:45.501977   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:28:45.501993   66801 node_conditions.go:105] duration metric: took 4.827279ms to run NodePressure ...
	I1028 18:28:45.502014   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:45.835429   66801 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840823   66801 kubeadm.go:739] kubelet initialised
	I1028 18:28:45.840852   66801 kubeadm.go:740] duration metric: took 5.391212ms waiting for restarted kubelet to initialise ...
	I1028 18:28:45.840862   66801 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:28:45.846565   66801 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
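
pod_ready.go above starts a 4-minute wait for each system-critical pod (coredns, etcd, kube-apiserver, and so on) to report the Ready condition. A rough client-go equivalent of that check is sketched below; it is not minikube's implementation, and the kubeconfig path (copied from the log) is only an example.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// condition the log lines keep polling for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19872-13443/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
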
	I1028 18:28:45.648994   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649559   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:45.649587   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:45.649512   68341 retry.go:31] will retry after 1.258585317s: waiting for machine to come up
	I1028 18:28:46.909541   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909955   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:46.909982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:46.909911   68341 retry.go:31] will retry after 1.827150306s: waiting for machine to come up
	I1028 18:28:48.738193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738696   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:48.738725   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:48.738653   68341 retry.go:31] will retry after 1.738249889s: waiting for machine to come up
	I1028 18:28:46.758767   67149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115173801s)
	I1028 18:28:46.758810   67149 crio.go:469] duration metric: took 3.115300284s to extract the tarball
	I1028 18:28:46.758821   67149 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:28:46.816906   67149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:28:46.864347   67149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 18:28:46.864376   67149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 18:28:46.864499   67149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.864564   67149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.864623   67149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.864639   67149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.864674   67149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.864686   67149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 18:28:46.864710   67149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.864529   67149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:46.866383   67149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:46.866445   67149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:46.866493   67149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:46.866579   67149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:46.866795   67149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:46.867073   67149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:46.867095   67149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 18:28:46.867488   67149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.043358   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.053844   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.055684   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.056812   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.066211   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.090931   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.104900   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 18:28:47.141214   67149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 18:28:47.141260   67149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.141307   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202804   67149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 18:28:47.202863   67149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.202873   67149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 18:28:47.202903   67149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.202915   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.202944   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.234811   67149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 18:28:47.234853   67149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.234900   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.236717   67149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 18:28:47.236751   67149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.236798   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.243872   67149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 18:28:47.243918   67149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.243971   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260210   67149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 18:28:47.260253   67149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 18:28:47.260256   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.260293   67149 ssh_runner.go:195] Run: which crictl
	I1028 18:28:47.260398   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.260438   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.260456   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.260517   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.260559   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413617   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.413776   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.413804   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.413825   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.414063   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.414103   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.414150   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.544933   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 18:28:47.581577   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.582079   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 18:28:47.582161   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 18:28:47.582206   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 18:28:47.582344   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 18:28:47.582819   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 18:28:47.662237   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 18:28:47.736212   67149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 18:28:47.739757   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 18:28:47.739928   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 18:28:47.739802   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 18:28:47.739812   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 18:28:47.739841   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 18:28:47.783578   67149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 18:28:49.121698   67149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:28:49.266583   67149 cache_images.go:92] duration metric: took 2.402188013s to LoadCachedImages
	W1028 18:28:49.266686   67149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19872-13443/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
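
The cache_images.go sequence above probes the container runtime for each required v1.20.0 image with podman image inspect, marks missing ones as "needs transfer", removes any stale tags with crictl, and then tries to load the images from the local cache (which fails here because the cached kube-proxy file itself is absent). A condensed sketch of just the presence check, illustrative rather than minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

// needsTransfer asks the runtime (via podman) whether the image is already
// present; a non-zero exit means it would have to be loaded from the cache.
func needsTransfer(image string) bool {
	cmd := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	return cmd.Run() != nil
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
	} {
		if needsTransfer(img) {
			fmt.Printf("%q needs transfer\n", img)
		}
	}
}
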
	I1028 18:28:49.266702   67149 kubeadm.go:934] updating node { 192.168.83.194 8443 v1.20.0 crio true true} ...
	I1028 18:28:49.266828   67149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-223868 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:28:49.266918   67149 ssh_runner.go:195] Run: crio config
	I1028 18:28:49.318146   67149 cni.go:84] Creating CNI manager for ""
	I1028 18:28:49.318167   67149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:28:49.318176   67149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:28:49.318193   67149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-223868 NodeName:old-k8s-version-223868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 18:28:49.318310   67149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-223868"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:28:49.318371   67149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 18:28:49.329249   67149 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:28:49.329339   67149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:28:49.339379   67149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 18:28:49.359216   67149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:28:49.378289   67149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 18:28:49.397766   67149 ssh_runner.go:195] Run: grep 192.168.83.194	control-plane.minikube.internal$ /etc/hosts
	I1028 18:28:49.401788   67149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:28:49.418204   67149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:28:49.558031   67149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:28:49.575443   67149 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868 for IP: 192.168.83.194
	I1028 18:28:49.575469   67149 certs.go:194] generating shared ca certs ...
	I1028 18:28:49.575489   67149 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:49.575693   67149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:28:49.575746   67149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:28:49.575756   67149 certs.go:256] generating profile certs ...
	I1028 18:28:49.575859   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.key
	I1028 18:28:49.575914   67149 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key.c3f44195
	I1028 18:28:49.575951   67149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key
	I1028 18:28:49.576058   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:28:49.576092   67149 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:28:49.576103   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:28:49.576131   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:28:49.576162   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:28:49.576186   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:28:49.576238   67149 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:28:49.576994   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:28:49.622814   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:28:49.653690   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:28:49.678975   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:28:49.707340   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 18:28:49.744836   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:28:49.776367   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:28:49.818999   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:28:49.847531   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:28:49.871924   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:28:49.897751   67149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:28:49.923267   67149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:28:49.939805   67149 ssh_runner.go:195] Run: openssl version
	I1028 18:28:49.945611   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:28:49.956191   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960862   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.960916   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:28:49.966701   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:28:49.977882   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:28:49.990873   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995751   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:28:49.995810   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:28:50.001891   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:28:50.013508   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:28:50.028132   67149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034144   67149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.034217   67149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:28:50.041768   67149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:28:50.054079   67149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:28:50.058983   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:28:50.064802   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:28:50.070790   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:28:50.077090   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:28:50.083149   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:28:50.089232   67149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
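
Each openssl x509 -noout -in ... -checkend 86400 run above simply verifies that the named certificate will not expire within the next 24 hours (86400 seconds). The same check in Go, as a small sketch using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside the
// next d, which is what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
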
	I1028 18:28:50.095205   67149 kubeadm.go:392] StartCluster: {Name:old-k8s-version-223868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-223868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:28:50.095338   67149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:28:50.095411   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.139777   67149 cri.go:89] found id: ""
	I1028 18:28:50.139854   67149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:28:50.151967   67149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:28:50.151986   67149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:28:50.152040   67149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:28:50.163454   67149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:28:50.164876   67149 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-223868" does not appear in /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:28:50.165798   67149 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-13443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-223868" cluster setting kubeconfig missing "old-k8s-version-223868" context setting]
	I1028 18:28:50.167121   67149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:28:50.169545   67149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:28:50.179447   67149 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.194
	I1028 18:28:50.179477   67149 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:28:50.179490   67149 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:28:50.179542   67149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:28:50.213891   67149 cri.go:89] found id: ""
	I1028 18:28:50.213963   67149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:28:50.231491   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:28:50.241752   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:28:50.241775   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:28:50.241829   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:28:50.252015   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:28:50.252075   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:28:50.263032   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:28:50.273500   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:28:50.273564   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:28:50.283603   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.293521   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:28:50.293567   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:28:50.303701   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:28:50.316202   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:28:50.316269   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
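	The four grep/rm pairs above are the stale-kubeconfig cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A minimal shell condensation of that loop, assuming the same endpoint and file names shown in the log (minikube issues each command separately over SSH):
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done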
	I1028 18:28:50.327841   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:28:50.341366   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:50.469586   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:49.414188   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:51.855115   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:50.478658   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479208   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:50.479237   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:50.479151   68341 retry.go:31] will retry after 2.362711935s: waiting for machine to come up
	I1028 18:28:52.842907   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843290   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:52.843314   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:52.843250   68341 retry.go:31] will retry after 2.561710525s: waiting for machine to come up
	I1028 18:28:51.507608   67149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037983659s)
	I1028 18:28:51.507645   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.733141   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.842228   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:28:51.947336   67149 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:28:51.947430   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.447618   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:52.947814   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.448476   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:53.947571   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.448371   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:54.947700   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.447735   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:55.948435   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
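	The repeated pgrep calls above are minikube polling until a kube-apiserver process appears on the node. A standalone shell equivalent of that poll (any overall timeout minikube applies is omitted here):
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5   # the timestamps above show a ~500ms retry cadence
	done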
	I1028 18:28:53.857886   66801 pod_ready.go:103] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"False"
	I1028 18:28:54.862972   66801 pod_ready.go:93] pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:54.863005   66801 pod_ready.go:82] duration metric: took 9.016413449s for pod "coredns-7c65d6cfc9-74b6t" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:54.863019   66801 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869043   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:55.869076   66801 pod_ready.go:82] duration metric: took 1.006049217s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.869091   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874842   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.874865   66801 pod_ready.go:82] duration metric: took 2.005766936s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.874875   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878913   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.878930   66801 pod_ready.go:82] duration metric: took 4.049698ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.878937   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889897   66801 pod_ready.go:93] pod "kube-proxy-kg42f" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:57.889913   66801 pod_ready.go:82] duration metric: took 10.971269ms for pod "kube-proxy-kg42f" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:57.889921   66801 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:55.407934   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408336   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | unable to find current IP address of domain default-k8s-diff-port-692033 in network mk-default-k8s-diff-port-692033
	I1028 18:28:55.408362   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | I1028 18:28:55.408274   68341 retry.go:31] will retry after 3.762790995s: waiting for machine to come up
	I1028 18:28:59.173489   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173900   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Found IP for machine: 192.168.39.215
	I1028 18:28:59.173923   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has current primary IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.173929   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserving static IP address...
	I1028 18:28:59.174320   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.174343   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | skip adding static IP to network mk-default-k8s-diff-port-692033 - found existing host DHCP lease matching {name: "default-k8s-diff-port-692033", mac: "52:54:00:89:53:89", ip: "192.168.39.215"}
	I1028 18:28:59.174355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Reserved static IP address: 192.168.39.215
	I1028 18:28:59.174365   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Waiting for SSH to be available...
	I1028 18:28:59.174376   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Getting to WaitForSSH function...
	I1028 18:28:59.176441   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176755   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.176786   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.176913   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH client type: external
	I1028 18:28:59.176936   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa (-rw-------)
	I1028 18:28:59.176958   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:28:59.176970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | About to run SSH command:
	I1028 18:28:59.176982   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | exit 0
	I1028 18:28:59.300272   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | SSH cmd err, output: <nil>: 
	I1028 18:28:59.300649   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetConfigRaw
	I1028 18:28:59.301261   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.303505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.303832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.303857   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.304080   67489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/config.json ...
	I1028 18:28:59.304287   67489 machine.go:93] provisionDockerMachine start ...
	I1028 18:28:59.304310   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:28:59.304535   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.306713   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307008   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.307042   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.307187   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.307348   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307505   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.307627   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.307768   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.307936   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.307946   67489 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:28:59.412710   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:28:59.412743   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413009   67489 buildroot.go:166] provisioning hostname "default-k8s-diff-port-692033"
	I1028 18:28:59.413041   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.413221   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.415772   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416048   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.416070   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.416251   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.416437   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.416728   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.416847   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.417030   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.417041   67489 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-692033 && echo "default-k8s-diff-port-692033" | sudo tee /etc/hostname
	I1028 18:28:59.538491   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-692033
	
	I1028 18:28:59.538518   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.540842   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541144   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.541173   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.541341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.541527   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541684   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.541815   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.541964   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:28:59.542123   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:28:59.542138   67489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-692033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-692033/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-692033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:28:59.657448   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 18:28:59.657480   67489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:28:59.657524   67489 buildroot.go:174] setting up certificates
	I1028 18:28:59.657539   67489 provision.go:84] configureAuth start
	I1028 18:28:59.657556   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetMachineName
	I1028 18:28:59.657832   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:28:59.660465   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660797   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.660840   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.660949   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.663393   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663801   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.663830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.663977   67489 provision.go:143] copyHostCerts
	I1028 18:28:59.664049   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:28:59.664062   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:28:59.664117   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:28:59.664217   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:28:59.664228   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:28:59.664250   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:28:59.664300   67489 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:28:59.664308   67489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:28:59.664327   67489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:28:59.664403   67489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-692033 san=[127.0.0.1 192.168.39.215 default-k8s-diff-port-692033 localhost minikube]
	I1028 18:28:59.882619   67489 provision.go:177] copyRemoteCerts
	I1028 18:28:59.882672   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:28:59.882695   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:28:59.885303   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:28:59.885686   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:28:59.885927   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:28:59.886121   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:28:59.886278   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:28:59.886382   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:28:59.975231   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:00.000412   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 18:29:00.024424   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 18:29:00.048646   67489 provision.go:87] duration metric: took 391.090444ms to configureAuth
	I1028 18:29:00.048674   67489 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:00.048884   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:00.048970   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.051793   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052156   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.052185   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.052323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.052532   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052729   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.052894   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.053080   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.053241   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.053254   67489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:00.525285   66600 start.go:364] duration metric: took 54.917560334s to acquireMachinesLock for "embed-certs-021370"
	I1028 18:29:00.525349   66600 start.go:96] Skipping create...Using existing machine configuration
	I1028 18:29:00.525359   66600 fix.go:54] fixHost starting: 
	I1028 18:29:00.525740   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:29:00.525778   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:29:00.544614   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I1028 18:29:00.544976   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:29:00.545433   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:29:00.545455   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:29:00.545842   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:29:00.546046   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:00.546230   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:29:00.547770   66600 fix.go:112] recreateIfNeeded on embed-certs-021370: state=Stopped err=<nil>
	I1028 18:29:00.547794   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	W1028 18:29:00.547957   66600 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 18:29:00.549753   66600 out.go:177] * Restarting existing kvm2 VM for "embed-certs-021370" ...
	I1028 18:28:56.447531   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:56.947711   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.447782   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:57.947642   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:58.948256   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.447558   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:28:59.948018   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.448186   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.947565   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:00.280618   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:00.280641   67489 machine.go:96] duration metric: took 976.341252ms to provisionDockerMachine
	I1028 18:29:00.280653   67489 start.go:293] postStartSetup for "default-k8s-diff-port-692033" (driver="kvm2")
	I1028 18:29:00.280669   67489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:00.280690   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.281004   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:00.281044   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.283656   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.283977   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.284010   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.284170   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.284382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.284549   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.284692   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.372947   67489 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:00.377456   67489 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:00.377480   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:00.377547   67489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:00.377646   67489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:00.377762   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:00.388767   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:00.413520   67489 start.go:296] duration metric: took 132.852709ms for postStartSetup
	I1028 18:29:00.413557   67489 fix.go:56] duration metric: took 19.992127182s for fixHost
	I1028 18:29:00.413578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.416040   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416377   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.416405   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.416553   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.416756   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.416930   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.417065   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.417228   67489 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:00.417412   67489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1028 18:29:00.417424   67489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:00.525082   67489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140140.492840769
	
	I1028 18:29:00.525105   67489 fix.go:216] guest clock: 1730140140.492840769
	I1028 18:29:00.525114   67489 fix.go:229] Guest: 2024-10-28 18:29:00.492840769 +0000 UTC Remote: 2024-10-28 18:29:00.413561948 +0000 UTC m=+205.301669628 (delta=79.278821ms)
	I1028 18:29:00.525169   67489 fix.go:200] guest clock delta is within tolerance: 79.278821ms
	I1028 18:29:00.525180   67489 start.go:83] releasing machines lock for "default-k8s-diff-port-692033", held for 20.103791447s
	I1028 18:29:00.525214   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.525495   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:00.528023   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528385   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.528415   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.528578   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529038   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:29:00.529287   67489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:00.529323   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.529380   67489 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:00.529403   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:29:00.531822   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532022   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532163   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532191   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532294   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532443   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:00.532481   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:00.532488   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532612   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:29:00.532680   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.532830   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:29:00.532830   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.532965   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:29:00.533103   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:29:00.609362   67489 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:00.636444   67489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:00.785916   67489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:00.792198   67489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:00.792279   67489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:00.812095   67489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:00.812124   67489 start.go:495] detecting cgroup driver to use...
	I1028 18:29:00.812190   67489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:00.829536   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:00.844021   67489 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:00.844090   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:00.858561   67489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:00.873128   67489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:00.990494   67489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:01.148650   67489 docker.go:233] disabling docker service ...
	I1028 18:29:01.148729   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:01.162487   67489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:01.177407   67489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:01.303665   67489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:01.430019   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:01.443822   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:01.462768   67489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:01.462830   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.473669   67489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:01.473737   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.484364   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.496220   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.507216   67489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:01.518848   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.534216   67489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.554294   67489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:01.565095   67489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:01.574547   67489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:01.574614   67489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:01.596531   67489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:01.606858   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:01.740272   67489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 18:29:01.844969   67489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:01.845053   67489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:01.850004   67489 start.go:563] Will wait 60s for crictl version
	I1028 18:29:01.850056   67489 ssh_runner.go:195] Run: which crictl
	I1028 18:29:01.854032   67489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:01.893281   67489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:01.893367   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.923557   67489 ssh_runner.go:195] Run: crio --version
	I1028 18:29:01.956282   67489 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:00.551001   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Start
	I1028 18:29:00.551172   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring networks are active...
	I1028 18:29:00.551820   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network default is active
	I1028 18:29:00.552130   66600 main.go:141] libmachine: (embed-certs-021370) Ensuring network mk-embed-certs-021370 is active
	I1028 18:29:00.552482   66600 main.go:141] libmachine: (embed-certs-021370) Getting domain xml...
	I1028 18:29:00.553186   66600 main.go:141] libmachine: (embed-certs-021370) Creating domain...
	I1028 18:29:01.830016   66600 main.go:141] libmachine: (embed-certs-021370) Waiting to get IP...
	I1028 18:29:01.831046   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:01.831522   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:01.831630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:01.831518   68528 retry.go:31] will retry after 300.306268ms: waiting for machine to come up
	I1028 18:29:02.132901   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.133350   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.133383   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.133293   68528 retry.go:31] will retry after 383.232008ms: waiting for machine to come up
	I1028 18:29:02.518736   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.519274   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.519299   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.519241   68528 retry.go:31] will retry after 354.591942ms: waiting for machine to come up
	I1028 18:29:02.875813   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:02.876360   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:02.876397   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:02.876325   68528 retry.go:31] will retry after 529.444037ms: waiting for machine to come up
	I1028 18:28:58.895888   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:28:58.895918   66801 pod_ready.go:82] duration metric: took 1.005990705s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:28:58.895932   66801 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:00.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:02.903390   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:01.957748   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetIP
	I1028 18:29:01.960967   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961355   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:29:01.961382   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:29:01.961635   67489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:01.966300   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:01.979786   67489 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:01.979899   67489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:01.979957   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:02.020659   67489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:02.020716   67489 ssh_runner.go:195] Run: which lz4
	I1028 18:29:02.024772   67489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:02.030183   67489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:02.030206   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:03.449423   67489 crio.go:462] duration metric: took 1.424673911s to copy over tarball
	I1028 18:29:03.449498   67489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
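	Because no preloaded images were found in the CRI-O store, the ~392 MB preload tarball is copied over SSH and unpacked directly into /var, populating the image store without pulling from a registry. An illustrative follow-up check (not part of the log) once the extraction finishes:
	sudo crictl images | grep kube-apiserver   # should list registry.k8s.io/kube-apiserver at v1.31.2 after the preload is unpacked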
	I1028 18:29:01.447557   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:01.947946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.448522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:02.947533   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.447522   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.948025   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.448136   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:04.948157   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.447635   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:05.947987   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:03.407835   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:03.408366   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:03.408390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:03.408265   68528 retry.go:31] will retry after 680.005296ms: waiting for machine to come up
	I1028 18:29:04.089802   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.090390   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.090409   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.090338   68528 retry.go:31] will retry after 833.681725ms: waiting for machine to come up
	I1028 18:29:04.925788   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:04.926278   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:04.926298   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:04.926227   68528 retry.go:31] will retry after 1.050194845s: waiting for machine to come up
	I1028 18:29:05.978270   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:05.978715   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:05.978742   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:05.978669   68528 retry.go:31] will retry after 1.416773018s: waiting for machine to come up
	I1028 18:29:07.397367   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:07.397843   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:07.397876   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:07.397787   68528 retry.go:31] will retry after 1.621623459s: waiting for machine to come up
	I1028 18:29:04.903465   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:06.903931   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:05.622217   67489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172685001s)
	I1028 18:29:05.622253   67489 crio.go:469] duration metric: took 2.172801769s to extract the tarball
	I1028 18:29:05.622264   67489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:05.660585   67489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:05.705484   67489 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:05.705510   67489 cache_images.go:84] Images are preloaded, skipping loading
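
The two "crictl images --output json" runs above are how the preload step decides whether the expected Kubernetes images are already in the CRI-O image store (before the tarball is extracted the kube-apiserver image is missing; afterwards all images are reported as preloaded). Below is a minimal standalone Go sketch of that kind of check, not the minikube implementation; the JSON field names ("images", "repoTags") and the target image tag are assumptions based on typical crictl output and may differ between crictl versions.

// checkimages.go - illustrative sketch: ask crictl for its image list and see
// whether an expected image tag is already present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same command the log shows being run over SSH inside the VM.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.2"
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				fmt.Println("found preloaded image:", tag)
				return
			}
		}
	}
	fmt.Println("image not found; assuming images are not preloaded")
}
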
	I1028 18:29:05.705520   67489 kubeadm.go:934] updating node { 192.168.39.215 8444 v1.31.2 crio true true} ...
	I1028 18:29:05.705634   67489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-692033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:05.705725   67489 ssh_runner.go:195] Run: crio config
	I1028 18:29:05.760618   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:05.760649   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:05.760661   67489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:05.760690   67489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-692033 NodeName:default-k8s-diff-port-692033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:05.760858   67489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-692033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 18:29:05.760936   67489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:05.771392   67489 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:05.771464   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:05.780926   67489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1028 18:29:05.797951   67489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:05.814159   67489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1028 18:29:05.830723   67489 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:05.835163   67489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
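
The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP: it filters out any existing line ending in that hostname, appends the new mapping, and copies the result back through a temp file. A minimal Go sketch of the same idea follows; it is not the minikube implementation, and running it against a real /etc/hosts requires root.

// hostsentry.go - illustrative sketch: replace the control-plane.minikube.internal
// entry in /etc/hosts via a temp file, mirroring the grep -v / append / cp pipeline.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.215\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale mapping, like `grep -v $'\tcontrol-plane.minikube.internal$'`.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.Join(kept, "\n")
	if !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	out += entry + "\n"

	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		panic(err)
	}
	fmt.Println("updated", hostsPath)
}
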
	I1028 18:29:05.847192   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:05.972201   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:29:05.990475   67489 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033 for IP: 192.168.39.215
	I1028 18:29:05.990492   67489 certs.go:194] generating shared ca certs ...
	I1028 18:29:05.990511   67489 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:05.990711   67489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:05.990764   67489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:05.990776   67489 certs.go:256] generating profile certs ...
	I1028 18:29:05.990875   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.key
	I1028 18:29:05.990991   67489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key.81b9981a
	I1028 18:29:05.991052   67489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key
	I1028 18:29:05.991218   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:05.991268   67489 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:05.991283   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:05.991317   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:05.991359   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:05.991405   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:05.991481   67489 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:05.992294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:06.033938   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:06.070407   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:06.115934   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:06.144600   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 18:29:06.169202   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:06.196294   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:06.219384   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 18:29:06.242169   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:06.266506   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:06.290175   67489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:06.313006   67489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:06.329076   67489 ssh_runner.go:195] Run: openssl version
	I1028 18:29:06.335322   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:06.346021   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350401   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.350464   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:06.356134   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:06.366765   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:06.377486   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381920   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.381978   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:06.387492   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
	I1028 18:29:06.398392   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:06.413238   67489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418376   67489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.418429   67489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:06.423997   67489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:06.436170   67489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:06.440853   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:06.446851   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:06.452980   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:06.458973   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:06.465088   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:06.470776   67489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
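
The series of "openssl x509 -noout ... -checkend 86400" runs above asks whether each control-plane certificate will expire within the next 86400 seconds (24 hours). Below is a minimal Go sketch of an equivalent check using crypto/x509; it is not the minikube implementation, and only two of the certificate paths from the log are shown (the rest work the same way).

// certcheck.go - illustrative sketch: report whether a PEM certificate's
// NotAfter falls within the next 24 hours, like `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Println(p, "expires within 24h:", soon)
	}
}
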
	I1028 18:29:06.476462   67489 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-692033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-692033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:06.476588   67489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:06.476638   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.519820   67489 cri.go:89] found id: ""
	I1028 18:29:06.519884   67489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:06.530091   67489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:06.530110   67489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:06.530171   67489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:06.539807   67489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:06.540946   67489 kubeconfig.go:125] found "default-k8s-diff-port-692033" server: "https://192.168.39.215:8444"
	I1028 18:29:06.543088   67489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:06.552354   67489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I1028 18:29:06.552379   67489 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:06.552389   67489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:06.552445   67489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:06.586545   67489 cri.go:89] found id: ""
	I1028 18:29:06.586611   67489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:06.603418   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:06.612856   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:06.612876   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:06.612921   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:29:06.621852   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:06.621900   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:06.631132   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:29:06.640088   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:06.640158   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:06.651007   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.660034   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:06.660104   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:06.669587   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:29:06.678863   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:06.678937   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:06.688820   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:06.698470   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:06.820432   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.030810   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210339958s)
	I1028 18:29:08.030839   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.255000   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.321500   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:08.412775   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:08.412854   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.913648   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.413011   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.459009   67489 api_server.go:72] duration metric: took 1.046232596s to wait for apiserver process to appear ...
	I1028 18:29:09.459041   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:09.459062   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:09.459626   67489 api_server.go:269] stopped: https://192.168.39.215:8444/healthz: Get "https://192.168.39.215:8444/healthz": dial tcp 192.168.39.215:8444: connect: connection refused
	I1028 18:29:09.960128   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:06.447581   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:06.947550   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.447977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:07.947491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.447960   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:08.947662   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.448201   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.947753   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.448116   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:10.948175   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:09.020419   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:09.020867   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:09.020899   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:09.020814   68528 retry.go:31] will retry after 2.2230034s: waiting for machine to come up
	I1028 18:29:11.245136   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:11.245630   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:11.245657   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:11.245595   68528 retry.go:31] will retry after 2.153898764s: waiting for machine to come up
	I1028 18:29:09.403596   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:11.903702   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:12.135346   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.135381   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.135394   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.166207   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:12.166234   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:12.459631   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.473153   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.473183   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:12.959778   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:12.969281   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:12.969320   67489 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:13.459913   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:29:13.464362   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:29:13.471925   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:13.471953   67489 api_server.go:131] duration metric: took 4.012904227s to wait for apiserver health ...
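
The healthz wait above polls https://192.168.39.215:8444/healthz roughly every 500ms until it returns 200, tolerating the intermediate connection-refused, 403, and 500 responses while the apiserver finishes its post-start hooks. A minimal Go sketch of such a poll loop follows; it is not the minikube implementation, it skips TLS verification, and it sends no credentials, so depending on RBAC state a real check may need client certificates (the anonymous 403 responses above show that case).

// healthzwait.go - illustrative sketch: poll an apiserver /healthz endpoint
// until it reports 200 OK or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.215:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
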
	I1028 18:29:13.471964   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:29:13.471971   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:13.473908   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:13.475283   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:13.487393   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:13.532627   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:13.544945   67489 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:13.544982   67489 system_pods.go:61] "coredns-7c65d6cfc9-ctx9z" [7067f349-3a22-468d-bd9d-19d057eb43f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:13.544993   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [313161ff-f30f-4e25-978d-9aa2eba7fc44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:13.545004   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [e9a66e8e-946b-4365-bd63-3adfdd75e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:13.545014   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [0e682f68-2f9a-4bf3-bbe4-3a6b1ef6778d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:13.545021   67489 system_pods.go:61] "kube-proxy-86rll" [d34f46c6-3227-40c9-ac97-066b98bfce32] Running
	I1028 18:29:13.545029   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [b9058969-31e2-4249-862f-ef5de7784adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:13.545043   67489 system_pods.go:61] "metrics-server-6867b74b74-dz4nl" [833c650e-5f5d-46a1-9ae1-64619c53a92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:13.545047   67489 system_pods.go:61] "storage-provisioner" [342db8fa-7873-47b0-a5a6-52cde2e19d47] Running
	I1028 18:29:13.545053   67489 system_pods.go:74] duration metric: took 12.403166ms to wait for pod list to return data ...
	I1028 18:29:13.545060   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:13.548591   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:13.548619   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:13.548632   67489 node_conditions.go:105] duration metric: took 3.567222ms to run NodePressure ...
	I1028 18:29:13.548649   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:13.818718   67489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826139   67489 kubeadm.go:739] kubelet initialised
	I1028 18:29:13.826161   67489 kubeadm.go:740] duration metric: took 7.415257ms waiting for restarted kubelet to initialise ...
	I1028 18:29:13.826170   67489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:13.833418   67489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.838793   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838820   67489 pod_ready.go:82] duration metric: took 5.377698ms for pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.838831   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "coredns-7c65d6cfc9-ctx9z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.838840   67489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.843172   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843195   67489 pod_ready.go:82] duration metric: took 4.34633ms for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.843203   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.843209   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:13.847581   67489 pod_ready.go:98] node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847615   67489 pod_ready.go:82] duration metric: took 4.389898ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	E1028 18:29:13.847630   67489 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-692033" hosting pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-692033" has status "Ready":"False"
	I1028 18:29:13.847642   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
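
The pod_ready waits above repeatedly fetch each system-critical pod and inspect its Ready condition, skipping pods whose node is itself not yet "Ready". A minimal client-go sketch of a Ready-condition wait follows; it is not the minikube implementation, and the kubeconfig path, namespace, and pod name are placeholders.

// podready.go - illustrative sketch: poll a pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "example-pod", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
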
	I1028 18:29:11.448521   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:11.947592   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.448427   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:12.948413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.448390   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.948518   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.447929   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:14.948106   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.448429   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:15.948236   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:13.401547   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:13.402054   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:13.402083   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:13.402028   68528 retry.go:31] will retry after 2.345507901s: waiting for machine to come up
	I1028 18:29:15.749122   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:15.749485   66600 main.go:141] libmachine: (embed-certs-021370) DBG | unable to find current IP address of domain embed-certs-021370 in network mk-embed-certs-021370
	I1028 18:29:15.749502   66600 main.go:141] libmachine: (embed-certs-021370) DBG | I1028 18:29:15.749451   68528 retry.go:31] will retry after 2.974576274s: waiting for machine to come up
	I1028 18:29:13.903930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.403934   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:15.858338   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:18.354245   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:16.447535   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:16.948117   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.448197   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:17.948491   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.948393   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.448406   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:19.947788   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.448100   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:20.947907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:18.727508   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.727990   66600 main.go:141] libmachine: (embed-certs-021370) Found IP for machine: 192.168.50.62
	I1028 18:29:18.728011   66600 main.go:141] libmachine: (embed-certs-021370) Reserving static IP address...
	I1028 18:29:18.728028   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has current primary IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.728447   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.728478   66600 main.go:141] libmachine: (embed-certs-021370) Reserved static IP address: 192.168.50.62
	I1028 18:29:18.728497   66600 main.go:141] libmachine: (embed-certs-021370) DBG | skip adding static IP to network mk-embed-certs-021370 - found existing host DHCP lease matching {name: "embed-certs-021370", mac: "52:54:00:2e:5a:fa", ip: "192.168.50.62"}
	I1028 18:29:18.728510   66600 main.go:141] libmachine: (embed-certs-021370) Waiting for SSH to be available...
	I1028 18:29:18.728520   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Getting to WaitForSSH function...
	I1028 18:29:18.730574   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731031   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.731069   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.731227   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH client type: external
	I1028 18:29:18.731248   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa (-rw-------)
	I1028 18:29:18.731282   66600 main.go:141] libmachine: (embed-certs-021370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 18:29:18.731310   66600 main.go:141] libmachine: (embed-certs-021370) DBG | About to run SSH command:
	I1028 18:29:18.731327   66600 main.go:141] libmachine: (embed-certs-021370) DBG | exit 0
	I1028 18:29:18.860213   66600 main.go:141] libmachine: (embed-certs-021370) DBG | SSH cmd err, output: <nil>: 
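
The WaitForSSH exchange above runs "exit 0" through an external ssh client until the command succeeds, which is how provisioning decides that sshd inside the VM is accepting connections. A minimal Go sketch of that loop follows; it is not the minikube implementation, the key path and address are copied from the log, and the retry count and interval are assumptions.

// sshwait.go - illustrative sketch: retry `ssh ... "exit 0"` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa",
		"docker@192.168.50.62",
		"exit 0",
	}
	for attempt := 1; attempt <= 30; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d: SSH not ready, retrying\n", attempt)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
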
	I1028 18:29:18.860619   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetConfigRaw
	I1028 18:29:18.861235   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:18.863576   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.863932   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.863956   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.864224   66600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/config.json ...
	I1028 18:29:18.864465   66600 machine.go:93] provisionDockerMachine start ...
	I1028 18:29:18.864521   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:18.864720   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.866951   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867314   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.867349   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.867511   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.867665   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867811   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.867941   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.868072   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.868230   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.868239   66600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 18:29:18.972695   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 18:29:18.972729   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.972970   66600 buildroot.go:166] provisioning hostname "embed-certs-021370"
	I1028 18:29:18.973000   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:18.973209   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:18.975608   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.975889   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:18.975915   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:18.976082   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:18.976269   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976401   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:18.976505   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:18.976625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:18.976796   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:18.976809   66600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-021370 && echo "embed-certs-021370" | sudo tee /etc/hostname
	I1028 18:29:19.094622   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-021370
	
	I1028 18:29:19.094655   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.097110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097436   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.097460   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.097639   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.097817   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.097967   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.098121   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.098309   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.098517   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.098533   66600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-021370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-021370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-021370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 18:29:19.218088   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
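Annotation (editor note, not part of the captured log): the shell block above is how minikube pins the machine hostname in /etc/hosts, replacing an existing 127.0.1.1 entry or appending one. A quick way to confirm the result on the guest, using the hostname from this run, would be:

  grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 embed-certs-021370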
	I1028 18:29:19.218112   66600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19872-13443/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-13443/.minikube}
	I1028 18:29:19.218140   66600 buildroot.go:174] setting up certificates
	I1028 18:29:19.218150   66600 provision.go:84] configureAuth start
	I1028 18:29:19.218159   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetMachineName
	I1028 18:29:19.218411   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:19.221093   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221441   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.221469   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.221641   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.223628   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.223908   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.223928   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.224085   66600 provision.go:143] copyHostCerts
	I1028 18:29:19.224155   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem, removing ...
	I1028 18:29:19.224185   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem
	I1028 18:29:19.224252   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/ca.pem (1082 bytes)
	I1028 18:29:19.224380   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem, removing ...
	I1028 18:29:19.224390   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem
	I1028 18:29:19.224422   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/cert.pem (1123 bytes)
	I1028 18:29:19.224532   66600 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem, removing ...
	I1028 18:29:19.224542   66600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem
	I1028 18:29:19.224570   66600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-13443/.minikube/key.pem (1679 bytes)
	I1028 18:29:19.224655   66600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-021370 san=[127.0.0.1 192.168.50.62 embed-certs-021370 localhost minikube]
	I1028 18:29:19.402860   66600 provision.go:177] copyRemoteCerts
	I1028 18:29:19.402925   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 18:29:19.402954   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.405556   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.405904   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.405939   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.406100   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.406265   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.406391   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.406494   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.486543   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 18:29:19.510790   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 18:29:19.534037   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 18:29:19.557509   66600 provision.go:87] duration metric: took 339.349044ms to configureAuth
	I1028 18:29:19.557531   66600 buildroot.go:189] setting minikube options for container-runtime
	I1028 18:29:19.557681   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:29:19.557745   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.560240   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560594   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.560623   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.560757   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.560931   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561110   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.561320   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.561490   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.561651   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.561664   66600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 18:29:19.781270   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 18:29:19.781304   66600 machine.go:96] duration metric: took 916.814114ms to provisionDockerMachine
	I1028 18:29:19.781317   66600 start.go:293] postStartSetup for "embed-certs-021370" (driver="kvm2")
	I1028 18:29:19.781327   66600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 18:29:19.781345   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:19.781664   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 18:29:19.781690   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.784176   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784509   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.784538   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.784667   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.784854   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.785028   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.785171   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:19.867396   66600 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 18:29:19.871516   66600 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 18:29:19.871542   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/addons for local assets ...
	I1028 18:29:19.871630   66600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-13443/.minikube/files for local assets ...
	I1028 18:29:19.871717   66600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem -> 206802.pem in /etc/ssl/certs
	I1028 18:29:19.871799   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 18:29:19.882017   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:19.906531   66600 start.go:296] duration metric: took 125.203636ms for postStartSetup
	I1028 18:29:19.906562   66600 fix.go:56] duration metric: took 19.381205641s for fixHost
	I1028 18:29:19.906581   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:19.909285   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909610   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:19.909640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:19.909778   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:19.909980   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910311   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:19.910444   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:19.910625   66600 main.go:141] libmachine: Using SSH client type: native
	I1028 18:29:19.910788   66600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I1028 18:29:19.910803   66600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 18:29:20.017311   66600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730140159.989127147
	
	I1028 18:29:20.017339   66600 fix.go:216] guest clock: 1730140159.989127147
	I1028 18:29:20.017346   66600 fix.go:229] Guest: 2024-10-28 18:29:19.989127147 +0000 UTC Remote: 2024-10-28 18:29:19.906566181 +0000 UTC m=+356.890524496 (delta=82.560966ms)
	I1028 18:29:20.017368   66600 fix.go:200] guest clock delta is within tolerance: 82.560966ms
	I1028 18:29:20.017374   66600 start.go:83] releasing machines lock for "embed-certs-021370", held for 19.492049852s
	I1028 18:29:20.017396   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.017657   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:20.020286   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020680   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.020704   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.020816   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021307   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021491   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:29:20.021577   66600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 18:29:20.021616   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.021746   66600 ssh_runner.go:195] Run: cat /version.json
	I1028 18:29:20.021767   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:29:20.024157   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024429   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024511   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024533   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.024679   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.024856   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.024880   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:20.024896   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:20.025019   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025070   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:29:20.025160   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.025201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:29:20.025304   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:29:20.025443   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:29:20.101316   66600 ssh_runner.go:195] Run: systemctl --version
	I1028 18:29:20.124859   66600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 18:29:20.268773   66600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 18:29:20.275277   66600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 18:29:20.275358   66600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 18:29:20.291972   66600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 18:29:20.291999   66600 start.go:495] detecting cgroup driver to use...
	I1028 18:29:20.292066   66600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 18:29:20.311389   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 18:29:20.325385   66600 docker.go:217] disabling cri-docker service (if available) ...
	I1028 18:29:20.325434   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 18:29:20.339246   66600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 18:29:20.353759   66600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 18:29:20.477639   66600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 18:29:20.622752   66600 docker.go:233] disabling docker service ...
	I1028 18:29:20.622825   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 18:29:20.637258   66600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 18:29:20.650210   66600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 18:29:20.801036   66600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 18:29:20.945078   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 18:29:20.959494   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 18:29:20.977797   66600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 18:29:20.977854   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.987991   66600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 18:29:20.988038   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:20.998188   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.008502   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.018540   66600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 18:29:21.028663   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.038758   66600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.056298   66600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 18:29:21.067136   66600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 18:29:21.076859   66600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 18:29:21.076906   66600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 18:29:21.090468   66600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 18:29:21.099951   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:21.226675   66600 ssh_runner.go:195] Run: sudo systemctl restart crio
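Annotation (editor note, not part of the captured log): the sed/tee commands above rewrite CRI-O's drop-in configuration before the `systemctl restart crio`. A minimal sketch for confirming the resulting settings on the node, assuming the same file paths used by those commands:

  # Illustrative check only; paths and expected values are taken from the commands above.
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",
  cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock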
	I1028 18:29:21.321993   66600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 18:29:21.322074   66600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 18:29:21.327981   66600 start.go:563] Will wait 60s for crictl version
	I1028 18:29:21.328028   66600 ssh_runner.go:195] Run: which crictl
	I1028 18:29:21.331673   66600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 18:29:21.369066   66600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 18:29:21.369168   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.396873   66600 ssh_runner.go:195] Run: crio --version
	I1028 18:29:21.426233   66600 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 18:29:21.427570   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetIP
	I1028 18:29:21.430207   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430560   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:29:21.430582   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:29:21.430732   66600 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 18:29:21.435293   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:21.447885   66600 kubeadm.go:883] updating cluster {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 18:29:21.447989   66600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 18:29:21.448067   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:21.488401   66600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 18:29:21.488488   66600 ssh_runner.go:195] Run: which lz4
	I1028 18:29:21.492578   66600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 18:29:21.496531   66600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 18:29:21.496560   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 18:29:22.824198   66600 crio.go:462] duration metric: took 1.331643546s to copy over tarball
	I1028 18:29:22.824276   66600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 18:29:18.902233   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.902721   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.904121   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:20.354850   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:22.355961   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:24.854445   67489 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:21.447903   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:21.948305   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.448529   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:22.947708   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.447881   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:23.947572   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.448433   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:24.948299   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.447748   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.947863   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
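Annotation (editor note, not log output): the repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above are minikube (pid 67149) polling for the kube-apiserver process while that cluster restarts. A rough shell equivalent of the wait loop, with a hypothetical 60-second cap (the actual timeout is not shown in this excerpt):

  # Poll until a kube-apiserver process launched for minikube appears, or give up after ~60s.
  timeout 60 bash -c 'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done'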
	I1028 18:29:24.906928   66600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082617931s)
	I1028 18:29:24.906959   66600 crio.go:469] duration metric: took 2.082732511s to extract the tarball
	I1028 18:29:24.906968   66600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 18:29:24.944094   66600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 18:29:24.991024   66600 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 18:29:24.991048   66600 cache_images.go:84] Images are preloaded, skipping loading
	I1028 18:29:24.991057   66600 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.31.2 crio true true} ...
	I1028 18:29:24.991175   66600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-021370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 18:29:24.991262   66600 ssh_runner.go:195] Run: crio config
	I1028 18:29:25.034609   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:25.034629   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:25.034639   66600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 18:29:25.034657   66600 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-021370 NodeName:embed-certs-021370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 18:29:25.034803   66600 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-021370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
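Annotation (editor note, not log output): the multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new and consumed by the later `kubeadm init phase ...` commands. As a sketch, such a config can be sanity-checked by hand, assuming the staged kubeadm binary offers the `config validate` subcommand (present in recent releases):

  # Illustrative only; the binary path matches the one used elsewhere in this log.
  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml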
	I1028 18:29:25.034858   66600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 18:29:25.044587   66600 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 18:29:25.044661   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 18:29:25.054150   66600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 18:29:25.070100   66600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 18:29:25.085866   66600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1028 18:29:25.101932   66600 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I1028 18:29:25.105817   66600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 18:29:25.117399   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:29:25.235698   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
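Annotation (editor note, not log output): the kubelet.service unit and its 10-kubeadm.conf drop-in copied a few lines above carry the ExecStart override shown earlier (bootstrap kubeconfig, node IP, hostname override). After the daemon-reload and start, the effective unit can be inspected with:

  # Shows /lib/systemd/system/kubelet.service plus the drop-in minikube just installed.
  systemctl cat kubelet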
	I1028 18:29:25.251517   66600 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370 for IP: 192.168.50.62
	I1028 18:29:25.251536   66600 certs.go:194] generating shared ca certs ...
	I1028 18:29:25.251549   66600 certs.go:226] acquiring lock for ca certs: {Name:mk59792b405ec68f6539fd611960288b38a217fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:29:25.251701   66600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key
	I1028 18:29:25.251758   66600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key
	I1028 18:29:25.251771   66600 certs.go:256] generating profile certs ...
	I1028 18:29:25.251871   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/client.key
	I1028 18:29:25.251951   66600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key.1a2ee1e7
	I1028 18:29:25.252010   66600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key
	I1028 18:29:25.252184   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem (1338 bytes)
	W1028 18:29:25.252213   66600 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680_empty.pem, impossibly tiny 0 bytes
	I1028 18:29:25.252222   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 18:29:25.252246   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/ca.pem (1082 bytes)
	I1028 18:29:25.252271   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/cert.pem (1123 bytes)
	I1028 18:29:25.252291   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/certs/key.pem (1679 bytes)
	I1028 18:29:25.252328   66600 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem (1708 bytes)
	I1028 18:29:25.252968   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 18:29:25.280370   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 18:29:25.323757   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 18:29:25.356813   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 18:29:25.395729   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 18:29:25.428768   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 18:29:25.459929   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 18:29:25.485206   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/embed-certs-021370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 18:29:25.514312   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/ssl/certs/206802.pem --> /usr/share/ca-certificates/206802.pem (1708 bytes)
	I1028 18:29:25.537007   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 18:29:25.559926   66600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-13443/.minikube/certs/20680.pem --> /usr/share/ca-certificates/20680.pem (1338 bytes)
	I1028 18:29:25.582419   66600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 18:29:25.599284   66600 ssh_runner.go:195] Run: openssl version
	I1028 18:29:25.605132   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/206802.pem && ln -fs /usr/share/ca-certificates/206802.pem /etc/ssl/certs/206802.pem"
	I1028 18:29:25.615576   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619856   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:20 /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.619911   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/206802.pem
	I1028 18:29:25.625516   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/206802.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 18:29:25.636185   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 18:29:25.646664   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650958   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:07 /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.650998   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 18:29:25.657176   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 18:29:25.668490   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20680.pem && ln -fs /usr/share/ca-certificates/20680.pem /etc/ssl/certs/20680.pem"
	I1028 18:29:25.679608   66600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.683993   66600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:20 /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.684041   66600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20680.pem
	I1028 18:29:25.689729   66600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20680.pem /etc/ssl/certs/51391683.0"
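Annotation (editor note, not log output): the *.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) are named after the OpenSSL subject hash of each CA certificate; that hash-named link is how TLS libraries locate a trusted CA in /etc/ssl/certs. The relationship can be seen with the same files the log just installed:

  # The hash printed here is the basename used for the corresponding symlink above.
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem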
	I1028 18:29:25.700817   66600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 18:29:25.705214   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 18:29:25.711351   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 18:29:25.717172   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 18:29:25.722879   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 18:29:25.728415   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 18:29:25.733859   66600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 18:29:25.739422   66600 kubeadm.go:392] StartCluster: {Name:embed-certs-021370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-021370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 18:29:25.739492   66600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 18:29:25.739534   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.779869   66600 cri.go:89] found id: ""
	I1028 18:29:25.779926   66600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 18:29:25.790753   66600 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 18:29:25.790771   66600 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 18:29:25.790811   66600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 18:29:25.800588   66600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 18:29:25.801624   66600 kubeconfig.go:125] found "embed-certs-021370" server: "https://192.168.50.62:8443"
	I1028 18:29:25.803466   66600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 18:29:25.813212   66600 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I1028 18:29:25.813240   66600 kubeadm.go:1160] stopping kube-system containers ...
	I1028 18:29:25.813254   66600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 18:29:25.813312   66600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 18:29:25.848911   66600 cri.go:89] found id: ""
	I1028 18:29:25.848976   66600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 18:29:25.866165   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:29:25.876454   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:29:25.876485   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:29:25.876539   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:29:25.886746   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:29:25.886802   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:29:25.897486   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:29:25.907828   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:29:25.907881   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:29:25.917520   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.926896   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:29:25.926950   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:29:25.937184   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:29:25.946539   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:29:25.946585   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:29:25.956520   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:29:25.968541   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:26.077716   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.298743   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.220990469s)
	I1028 18:29:27.298777   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.517286   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.582890   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:27.648091   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:29:27.648159   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:25.402969   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:27.405049   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.356621   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.356642   67489 pod_ready.go:82] duration metric: took 12.508989427s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.356653   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361609   67489 pod_ready.go:93] pod "kube-proxy-86rll" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.361627   67489 pod_ready.go:82] duration metric: took 4.968039ms for pod "kube-proxy-86rll" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.361635   67489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365430   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:26.365449   67489 pod_ready.go:82] duration metric: took 3.807327ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:26.365460   67489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:28.373442   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:26.448386   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:26.948082   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.447496   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:27.948285   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.448205   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.947683   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.447813   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.947810   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.448413   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:30.947477   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.148668   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:28.648320   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.148392   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.648218   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:29.682858   66600 api_server.go:72] duration metric: took 2.034774456s to wait for apiserver process to appear ...
	I1028 18:29:29.682888   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:29:29.682915   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:29.683457   66600 api_server.go:269] stopped: https://192.168.50.62:8443/healthz: Get "https://192.168.50.62:8443/healthz": dial tcp 192.168.50.62:8443: connect: connection refused
	I1028 18:29:30.182997   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.878280   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.878304   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:32.878318   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:32.942789   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 18:29:32.942828   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 18:29:29.903158   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:32.404024   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.183344   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.187337   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.187362   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:33.683288   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:33.687653   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 18:29:33.687680   66600 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 18:29:34.183190   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:29:34.187671   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:29:34.195909   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:29:34.195938   66600 api_server.go:131] duration metric: took 4.51303648s to wait for apiserver health ...
	I1028 18:29:34.195950   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:29:34.195959   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:29:34.197469   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:29:30.872450   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:33.372710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:31.448099   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:31.948269   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.447660   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:32.947559   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.447716   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:33.948569   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.447555   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.947612   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.448411   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:35.947786   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:34.198803   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:29:34.221645   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:29:34.250694   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:29:34.261167   66600 system_pods.go:59] 8 kube-system pods found
	I1028 18:29:34.261211   66600 system_pods.go:61] "coredns-7c65d6cfc9-bdtd8" [e1fff57c-ba57-4592-9049-7cc80a6f67a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 18:29:34.261229   66600 system_pods.go:61] "etcd-embed-certs-021370" [0c805e30-b6d8-416c-97af-c33b142b46e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 18:29:34.261240   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [244e08f7-7e8c-4547-b145-9816374fe582] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 18:29:34.261251   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [c08dc68e-d441-4d96-8377-957c381c4ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 18:29:34.261265   66600 system_pods.go:61] "kube-proxy-7g7lr" [828a4297-7703-46a7-bffe-c8daf83ef4bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 18:29:34.261277   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [2bc3fea6-0f01-43e9-b69e-deb26980e658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 18:29:34.261286   66600 system_pods.go:61] "metrics-server-6867b74b74-gg8bl" [599d8cf3-717d-46b2-a5ba-43e00f46829b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:29:34.261296   66600 system_pods.go:61] "storage-provisioner" [ad047e20-2de9-447c-83bc-8b835292a25f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 18:29:34.261307   66600 system_pods.go:74] duration metric: took 10.589505ms to wait for pod list to return data ...
	I1028 18:29:34.261319   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:29:34.265041   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:29:34.265066   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:29:34.265079   66600 node_conditions.go:105] duration metric: took 3.75485ms to run NodePressure ...
	I1028 18:29:34.265098   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 18:29:34.567509   66600 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571573   66600 kubeadm.go:739] kubelet initialised
	I1028 18:29:34.571592   66600 kubeadm.go:740] duration metric: took 4.056877ms waiting for restarted kubelet to initialise ...
	I1028 18:29:34.571599   66600 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:29:34.576872   66600 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:36.586357   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:34.901383   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.902526   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:35.871154   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:37.873138   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:36.447566   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:36.947886   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.448276   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:37.948547   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.447546   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:38.947974   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.448334   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.948183   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.448396   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:40.947620   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:39.083269   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.083414   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:41.083443   66600 pod_ready.go:82] duration metric: took 6.506548177s for pod "coredns-7c65d6cfc9-bdtd8" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:41.083453   66600 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:39.401480   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.402426   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:40.370529   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:42.371580   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:44.372259   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:41.448306   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:41.947486   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.448219   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:42.948295   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.447765   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.947468   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.448454   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:44.947488   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.447568   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:45.948070   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:43.089927   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.589484   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.594775   66600 pod_ready.go:103] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:43.403246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:45.403595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:47.902160   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.872441   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.371650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:46.448123   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:46.948178   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.447989   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:47.947888   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.448230   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.947692   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.448090   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:49.947996   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.447949   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:50.947977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:48.089584   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.089607   66600 pod_ready.go:82] duration metric: took 7.006147079s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.089619   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093940   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.093959   66600 pod_ready.go:82] duration metric: took 4.332474ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.093969   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098279   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.098295   66600 pod_ready.go:82] duration metric: took 4.319206ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.098304   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102326   66600 pod_ready.go:93] pod "kube-proxy-7g7lr" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.102341   66600 pod_ready.go:82] duration metric: took 4.03162ms for pod "kube-proxy-7g7lr" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.102349   66600 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106249   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:29:48.106265   66600 pod_ready.go:82] duration metric: took 3.910208ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:48.106279   66600 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	I1028 18:29:50.112678   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:52.113794   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:49.902296   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.902424   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.371741   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:53.371833   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:51.448130   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:51.948450   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:51.948545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:51.987428   67149 cri.go:89] found id: ""
	I1028 18:29:51.987459   67149 logs.go:282] 0 containers: []
	W1028 18:29:51.987470   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:51.987478   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:51.987534   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:52.021429   67149 cri.go:89] found id: ""
	I1028 18:29:52.021452   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.021460   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:52.021466   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:52.021509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:52.055338   67149 cri.go:89] found id: ""
	I1028 18:29:52.055362   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.055373   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:52.055380   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:52.055432   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:52.088673   67149 cri.go:89] found id: ""
	I1028 18:29:52.088697   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.088705   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:52.088711   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:52.088766   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:52.129833   67149 cri.go:89] found id: ""
	I1028 18:29:52.129854   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.129862   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:52.129867   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:52.129918   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:52.162994   67149 cri.go:89] found id: ""
	I1028 18:29:52.163029   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.163040   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:52.163047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:52.163105   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:52.196819   67149 cri.go:89] found id: ""
	I1028 18:29:52.196840   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.196848   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:52.196853   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:52.196906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:52.232924   67149 cri.go:89] found id: ""
	I1028 18:29:52.232955   67149 logs.go:282] 0 containers: []
	W1028 18:29:52.232965   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:52.232977   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:52.232992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:52.283317   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:52.283353   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:52.296648   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:52.296673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:52.423396   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:52.423418   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:52.423429   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:52.497671   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:52.497704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:55.037920   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:55.052539   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:55.052602   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:55.089302   67149 cri.go:89] found id: ""
	I1028 18:29:55.089332   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.089343   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:55.089351   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:55.089404   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:55.127317   67149 cri.go:89] found id: ""
	I1028 18:29:55.127345   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.127352   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:55.127358   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:55.127413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:55.161689   67149 cri.go:89] found id: ""
	I1028 18:29:55.161714   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.161721   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:55.161727   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:55.161772   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:55.196494   67149 cri.go:89] found id: ""
	I1028 18:29:55.196521   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.196534   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:55.196542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:55.196596   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:55.234980   67149 cri.go:89] found id: ""
	I1028 18:29:55.235008   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.235020   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:55.235028   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:55.235086   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:55.274750   67149 cri.go:89] found id: ""
	I1028 18:29:55.274775   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.274783   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:55.274789   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:55.274842   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:55.309839   67149 cri.go:89] found id: ""
	I1028 18:29:55.309865   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.309874   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:55.309881   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:55.309943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:55.358765   67149 cri.go:89] found id: ""
	I1028 18:29:55.358793   67149 logs.go:282] 0 containers: []
	W1028 18:29:55.358805   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:55.358816   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:55.358830   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:55.422821   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:55.422869   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:55.439458   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:55.439482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:55.507743   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:55.507764   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:55.507775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:55.582679   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:55.582710   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:54.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.612967   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:54.402722   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:56.902816   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:55.372539   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:57.871444   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:58.124907   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:29:58.139125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:29:58.139181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:29:58.178829   67149 cri.go:89] found id: ""
	I1028 18:29:58.178853   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.178864   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:29:58.178871   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:29:58.178933   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:29:58.212290   67149 cri.go:89] found id: ""
	I1028 18:29:58.212320   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.212336   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:29:58.212344   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:29:58.212402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:29:58.246108   67149 cri.go:89] found id: ""
	I1028 18:29:58.246135   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.246145   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:29:58.246152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:29:58.246212   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:29:58.280625   67149 cri.go:89] found id: ""
	I1028 18:29:58.280651   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.280662   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:29:58.280670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:29:58.280727   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:29:58.318755   67149 cri.go:89] found id: ""
	I1028 18:29:58.318783   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.318793   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:29:58.318801   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:29:58.318853   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:29:58.356452   67149 cri.go:89] found id: ""
	I1028 18:29:58.356487   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.356499   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:29:58.356506   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:29:58.356564   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:29:58.389906   67149 cri.go:89] found id: ""
	I1028 18:29:58.389928   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.389936   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:29:58.389943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:29:58.390001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:29:58.425883   67149 cri.go:89] found id: ""
	I1028 18:29:58.425911   67149 logs.go:282] 0 containers: []
	W1028 18:29:58.425920   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:29:58.425929   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:29:58.425943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:29:58.484392   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:29:58.484433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:29:58.498133   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:29:58.498159   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:29:58.572358   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:29:58.572382   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:29:58.572397   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:29:58.654963   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:29:58.654997   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:29:58.613408   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.614235   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:29:59.402355   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.403000   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:00.370479   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:02.370951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:04.372159   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:01.196593   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:01.209622   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:01.209693   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:01.243682   67149 cri.go:89] found id: ""
	I1028 18:30:01.243708   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.243718   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:01.243726   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:01.243786   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:01.277617   67149 cri.go:89] found id: ""
	I1028 18:30:01.277646   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.277654   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:01.277660   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:01.277710   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:01.314028   67149 cri.go:89] found id: ""
	I1028 18:30:01.314055   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.314067   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:01.314081   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:01.314152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:01.350324   67149 cri.go:89] found id: ""
	I1028 18:30:01.350348   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.350356   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:01.350362   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:01.350415   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:01.385802   67149 cri.go:89] found id: ""
	I1028 18:30:01.385826   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.385834   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:01.385840   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:01.385883   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:01.421507   67149 cri.go:89] found id: ""
	I1028 18:30:01.421534   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.421545   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:01.421553   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:01.421611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:01.457285   67149 cri.go:89] found id: ""
	I1028 18:30:01.457314   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.457326   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:01.457333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:01.457380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:01.490962   67149 cri.go:89] found id: ""
	I1028 18:30:01.490984   67149 logs.go:282] 0 containers: []
	W1028 18:30:01.490992   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:01.491000   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:01.491012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:01.559906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:01.559937   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:01.559962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:01.639455   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:01.639485   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:01.681968   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:01.681994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:01.736639   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:01.736672   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.251876   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:04.265639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:04.265711   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:04.300133   67149 cri.go:89] found id: ""
	I1028 18:30:04.300159   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.300167   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:04.300173   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:04.300228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:04.335723   67149 cri.go:89] found id: ""
	I1028 18:30:04.335749   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.335760   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:04.335767   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:04.335825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:04.373009   67149 cri.go:89] found id: ""
	I1028 18:30:04.373030   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.373040   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:04.373048   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:04.373113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:04.405969   67149 cri.go:89] found id: ""
	I1028 18:30:04.405993   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.406003   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:04.406011   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:04.406066   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:04.441067   67149 cri.go:89] found id: ""
	I1028 18:30:04.441095   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.441106   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:04.441112   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:04.441176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:04.475231   67149 cri.go:89] found id: ""
	I1028 18:30:04.475260   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.475270   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:04.475277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:04.475342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:04.512970   67149 cri.go:89] found id: ""
	I1028 18:30:04.512998   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.513009   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:04.513017   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:04.513078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:04.547857   67149 cri.go:89] found id: ""
	I1028 18:30:04.547880   67149 logs.go:282] 0 containers: []
	W1028 18:30:04.547890   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:04.547901   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:04.547913   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:04.598870   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:04.598900   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:04.612678   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:04.612705   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:04.686945   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:04.686967   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:04.686979   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:04.764943   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:04.764992   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:03.113309   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.113449   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.613568   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:03.902735   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:05.903116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:06.872012   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:09.371576   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:07.310905   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:07.323880   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:07.323946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:07.363597   67149 cri.go:89] found id: ""
	I1028 18:30:07.363626   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.363637   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:07.363645   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:07.363706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:07.401051   67149 cri.go:89] found id: ""
	I1028 18:30:07.401073   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.401082   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:07.401089   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:07.401147   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:07.439710   67149 cri.go:89] found id: ""
	I1028 18:30:07.439735   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.439743   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:07.439748   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:07.439796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:07.476627   67149 cri.go:89] found id: ""
	I1028 18:30:07.476653   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.476663   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:07.476670   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:07.476747   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:07.508770   67149 cri.go:89] found id: ""
	I1028 18:30:07.508796   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.508807   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:07.508814   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:07.508874   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:07.543467   67149 cri.go:89] found id: ""
	I1028 18:30:07.543496   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.543506   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:07.543514   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:07.543575   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:07.577181   67149 cri.go:89] found id: ""
	I1028 18:30:07.577204   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.577212   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:07.577217   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:07.577266   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:07.611862   67149 cri.go:89] found id: ""
	I1028 18:30:07.611886   67149 logs.go:282] 0 containers: []
	W1028 18:30:07.611896   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:07.611906   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:07.611924   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:07.699794   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:07.699833   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:07.747920   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:07.747948   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:07.797402   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:07.797434   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:07.811752   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:07.811778   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:07.881604   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.382191   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:10.394572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:10.394624   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:10.428941   67149 cri.go:89] found id: ""
	I1028 18:30:10.428973   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.428984   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:10.429004   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:10.429071   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:10.462526   67149 cri.go:89] found id: ""
	I1028 18:30:10.462558   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.462569   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:10.462578   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:10.462641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:10.498472   67149 cri.go:89] found id: ""
	I1028 18:30:10.498495   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.498503   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:10.498509   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:10.498557   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:10.535400   67149 cri.go:89] found id: ""
	I1028 18:30:10.535422   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.535430   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:10.535436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:10.535483   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:10.568961   67149 cri.go:89] found id: ""
	I1028 18:30:10.568981   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.568988   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:10.568994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:10.569041   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:10.601273   67149 cri.go:89] found id: ""
	I1028 18:30:10.601306   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.601318   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:10.601325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:10.601383   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:10.638093   67149 cri.go:89] found id: ""
	I1028 18:30:10.638124   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.638135   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:10.638141   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:10.638203   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:10.674624   67149 cri.go:89] found id: ""
	I1028 18:30:10.674654   67149 logs.go:282] 0 containers: []
	W1028 18:30:10.674665   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:10.674675   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:10.674688   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:10.714568   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:10.714602   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:10.764732   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:10.764765   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:10.778111   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:10.778139   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:10.854488   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:10.854516   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:10.854531   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:10.113469   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.614268   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:08.401958   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:10.402159   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:12.402379   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:11.872789   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.372947   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:13.438803   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:13.452322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:13.452397   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:13.487337   67149 cri.go:89] found id: ""
	I1028 18:30:13.487360   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.487369   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:13.487381   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:13.487488   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:13.521992   67149 cri.go:89] found id: ""
	I1028 18:30:13.522024   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.522034   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:13.522041   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:13.522099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:13.555315   67149 cri.go:89] found id: ""
	I1028 18:30:13.555347   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.555363   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:13.555371   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:13.555431   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:13.589401   67149 cri.go:89] found id: ""
	I1028 18:30:13.589425   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.589436   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:13.589445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:13.589493   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:13.629340   67149 cri.go:89] found id: ""
	I1028 18:30:13.629370   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.629385   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:13.629393   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:13.629454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:13.667307   67149 cri.go:89] found id: ""
	I1028 18:30:13.667337   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.667348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:13.667355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:13.667418   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:13.701457   67149 cri.go:89] found id: ""
	I1028 18:30:13.701513   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.701526   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:13.701536   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:13.701594   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:13.737989   67149 cri.go:89] found id: ""
	I1028 18:30:13.738023   67149 logs.go:282] 0 containers: []
	W1028 18:30:13.738033   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:13.738043   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:13.738056   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:13.791743   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:13.791777   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:13.805501   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:13.805529   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:13.882239   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:13.882262   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:13.882276   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:13.963480   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:13.963516   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:15.112587   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:17.113242   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:14.901879   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.902869   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.871650   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:18.872448   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:16.502799   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:16.516397   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:16.516456   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:16.551670   67149 cri.go:89] found id: ""
	I1028 18:30:16.551701   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.551712   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:16.551719   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:16.551771   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:16.584390   67149 cri.go:89] found id: ""
	I1028 18:30:16.584417   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.584428   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:16.584435   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:16.584510   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:16.620868   67149 cri.go:89] found id: ""
	I1028 18:30:16.620892   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.620899   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:16.620904   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:16.620949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:16.654189   67149 cri.go:89] found id: ""
	I1028 18:30:16.654216   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.654225   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:16.654231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:16.654284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:16.694526   67149 cri.go:89] found id: ""
	I1028 18:30:16.694557   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.694568   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:16.694575   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:16.694640   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:16.728857   67149 cri.go:89] found id: ""
	I1028 18:30:16.728884   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.728892   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:16.728898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:16.728948   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:16.763198   67149 cri.go:89] found id: ""
	I1028 18:30:16.763220   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.763227   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:16.763232   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:16.763282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:16.800120   67149 cri.go:89] found id: ""
	I1028 18:30:16.800142   67149 logs.go:282] 0 containers: []
	W1028 18:30:16.800149   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:16.800157   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:16.800167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:16.852710   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:16.852736   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:16.867365   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:16.867395   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:16.945605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:16.945627   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:16.945643   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:17.022838   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:17.022871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.563585   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:19.577612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:19.577683   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:19.615797   67149 cri.go:89] found id: ""
	I1028 18:30:19.615820   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.615829   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:19.615836   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:19.615882   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:19.654780   67149 cri.go:89] found id: ""
	I1028 18:30:19.654802   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.654810   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:19.654816   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:19.654873   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:19.693502   67149 cri.go:89] found id: ""
	I1028 18:30:19.693532   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.693542   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:19.693550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:19.693611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:19.731869   67149 cri.go:89] found id: ""
	I1028 18:30:19.731902   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.731910   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:19.731916   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:19.731974   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:19.765046   67149 cri.go:89] found id: ""
	I1028 18:30:19.765081   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.765092   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:19.765099   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:19.765158   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:19.798082   67149 cri.go:89] found id: ""
	I1028 18:30:19.798105   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.798113   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:19.798119   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:19.798172   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:19.832562   67149 cri.go:89] found id: ""
	I1028 18:30:19.832590   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.832601   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:19.832608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:19.832676   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:19.867213   67149 cri.go:89] found id: ""
	I1028 18:30:19.867240   67149 logs.go:282] 0 containers: []
	W1028 18:30:19.867251   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:19.867260   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:19.867277   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:19.942276   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:19.942304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:19.977642   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:19.977671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:20.027077   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:20.027109   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:20.040159   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:20.040181   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:20.113350   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:19.113850   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.613505   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:19.402671   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.902317   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:21.372438   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.872137   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:22.614379   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:22.628550   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:22.628607   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:22.662647   67149 cri.go:89] found id: ""
	I1028 18:30:22.662670   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.662677   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:22.662683   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:22.662732   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:22.696697   67149 cri.go:89] found id: ""
	I1028 18:30:22.696736   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.696747   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:22.696753   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:22.696815   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:22.730011   67149 cri.go:89] found id: ""
	I1028 18:30:22.730039   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.730049   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:22.730056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:22.730114   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:22.766604   67149 cri.go:89] found id: ""
	I1028 18:30:22.766629   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.766639   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:22.766647   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:22.766703   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:22.800581   67149 cri.go:89] found id: ""
	I1028 18:30:22.800608   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.800617   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:22.800625   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:22.800692   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:22.832742   67149 cri.go:89] found id: ""
	I1028 18:30:22.832767   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.832775   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:22.832780   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:22.832823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:22.865850   67149 cri.go:89] found id: ""
	I1028 18:30:22.865876   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.865885   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:22.865892   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:22.865949   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:22.904410   67149 cri.go:89] found id: ""
	I1028 18:30:22.904433   67149 logs.go:282] 0 containers: []
	W1028 18:30:22.904443   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:22.904454   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:22.904482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:22.959275   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:22.959310   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:22.972630   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:22.972652   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:23.043851   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:23.043873   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:23.043886   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:23.121657   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:23.121686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:25.662109   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:25.676366   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:25.676443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:25.715192   67149 cri.go:89] found id: ""
	I1028 18:30:25.715216   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.715224   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:25.715230   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:25.715283   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:25.754736   67149 cri.go:89] found id: ""
	I1028 18:30:25.754765   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.754773   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:25.754779   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:25.754823   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:25.794179   67149 cri.go:89] found id: ""
	I1028 18:30:25.794207   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.794216   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:25.794224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:25.794278   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:25.833206   67149 cri.go:89] found id: ""
	I1028 18:30:25.833238   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.833246   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:25.833252   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:25.833298   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:25.871628   67149 cri.go:89] found id: ""
	I1028 18:30:25.871659   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.871669   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:25.871677   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:25.871735   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:25.910900   67149 cri.go:89] found id: ""
	I1028 18:30:25.910924   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.910934   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:25.910942   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:25.911001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:25.943972   67149 cri.go:89] found id: ""
	I1028 18:30:25.943992   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.943999   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:25.944004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:25.944059   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:25.982521   67149 cri.go:89] found id: ""
	I1028 18:30:25.982544   67149 logs.go:282] 0 containers: []
	W1028 18:30:25.982551   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:25.982559   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:25.982569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:26.033003   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:26.033031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:26.046480   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:26.046503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 18:30:24.112244   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.113815   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:23.902652   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:26.402135   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:25.873075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.372129   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	W1028 18:30:26.117194   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:26.117213   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:26.117230   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:26.195399   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:26.195430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:28.737237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:28.751846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:28.751910   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:28.794259   67149 cri.go:89] found id: ""
	I1028 18:30:28.794290   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.794301   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:28.794308   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:28.794374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:28.827573   67149 cri.go:89] found id: ""
	I1028 18:30:28.827603   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.827611   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:28.827616   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:28.827671   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:28.860676   67149 cri.go:89] found id: ""
	I1028 18:30:28.860702   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.860713   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:28.860721   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:28.860780   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:28.897302   67149 cri.go:89] found id: ""
	I1028 18:30:28.897327   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.897343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:28.897351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:28.897410   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:28.933425   67149 cri.go:89] found id: ""
	I1028 18:30:28.933454   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.933464   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:28.933471   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:28.933535   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:28.966004   67149 cri.go:89] found id: ""
	I1028 18:30:28.966032   67149 logs.go:282] 0 containers: []
	W1028 18:30:28.966043   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:28.966051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:28.966107   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:29.002788   67149 cri.go:89] found id: ""
	I1028 18:30:29.002818   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.002829   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:29.002835   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:29.002894   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:29.033351   67149 cri.go:89] found id: ""
	I1028 18:30:29.033379   67149 logs.go:282] 0 containers: []
	W1028 18:30:29.033389   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:29.033400   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:29.033420   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:29.107997   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:29.108025   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:29.144727   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:29.144753   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:29.206487   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:29.206521   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:29.219722   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:29.219744   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:29.288254   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:28.612485   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.113113   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:28.902654   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.902960   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:30.871338   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.372081   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:31.789035   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:31.802587   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:31.802650   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:31.838372   67149 cri.go:89] found id: ""
	I1028 18:30:31.838401   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.838410   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:31.838416   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:31.838469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:31.877794   67149 cri.go:89] found id: ""
	I1028 18:30:31.877822   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.877833   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:31.877840   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:31.877896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:31.917442   67149 cri.go:89] found id: ""
	I1028 18:30:31.917472   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.917483   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:31.917490   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:31.917549   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:31.951900   67149 cri.go:89] found id: ""
	I1028 18:30:31.951931   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.951943   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:31.951951   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:31.952008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:31.988011   67149 cri.go:89] found id: ""
	I1028 18:30:31.988040   67149 logs.go:282] 0 containers: []
	W1028 18:30:31.988051   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:31.988058   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:31.988116   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:32.021042   67149 cri.go:89] found id: ""
	I1028 18:30:32.021063   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.021071   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:32.021077   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:32.021124   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:32.053748   67149 cri.go:89] found id: ""
	I1028 18:30:32.053770   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.053778   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:32.053783   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:32.053837   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:32.089725   67149 cri.go:89] found id: ""
	I1028 18:30:32.089756   67149 logs.go:282] 0 containers: []
	W1028 18:30:32.089766   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:32.089777   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:32.089790   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:32.140000   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:32.140031   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:32.154023   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:32.154046   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:32.231222   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:32.231242   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:32.231255   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:32.311354   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:32.311388   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:34.852507   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:34.867133   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:34.867198   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:34.901201   67149 cri.go:89] found id: ""
	I1028 18:30:34.901228   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.901238   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:34.901245   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:34.901300   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:34.962788   67149 cri.go:89] found id: ""
	I1028 18:30:34.962814   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.962824   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:34.962835   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:34.962896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:34.996879   67149 cri.go:89] found id: ""
	I1028 18:30:34.996906   67149 logs.go:282] 0 containers: []
	W1028 18:30:34.996917   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:34.996926   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:34.996986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:35.033516   67149 cri.go:89] found id: ""
	I1028 18:30:35.033541   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.033553   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:35.033560   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:35.033622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:35.066903   67149 cri.go:89] found id: ""
	I1028 18:30:35.066933   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.066945   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:35.066953   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:35.067010   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:35.099675   67149 cri.go:89] found id: ""
	I1028 18:30:35.099697   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.099704   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:35.099710   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:35.099755   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:35.133595   67149 cri.go:89] found id: ""
	I1028 18:30:35.133623   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.133633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:35.133641   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:35.133699   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:35.172236   67149 cri.go:89] found id: ""
	I1028 18:30:35.172262   67149 logs.go:282] 0 containers: []
	W1028 18:30:35.172272   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:35.172282   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:35.172296   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:35.224952   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:35.224981   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:35.238554   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:35.238578   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:35.318991   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:35.319024   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:35.319040   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:35.399763   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:35.399799   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
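The cycle above repeats throughout this log: the process with PID 67149 is minikube's log collector probing the CRI runtime for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finding none, and then falling back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. As a rough sketch only, the same probes can be reproduced by hand inside the guest (for example via "minikube ssh"); every command below is copied from the log entries above, and nothing further is implied about minikube's internals.

	# Probe the CRI runtime for each expected control-plane container,
	# mirroring the collector's queries in the log above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="${name}"
	done

	# Fallback sources the collector gathers when no containers are found.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a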
	I1028 18:30:33.612446   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.613847   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:33.402375   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.402653   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:37.902346   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:35.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:38.372413   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
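The interleaved pod_ready.go lines come from three other test processes (PIDs 66600, 66801 and 67489), each polling a metrics-server pod in its own cluster for the Ready condition; "Ready":"False" means the poll will be retried. A hedged, roughly equivalent manual check is sketched below: the pod name is taken from the log, while the kubeconfig context is whichever test profile the poll belongs to and appears here only as a placeholder.

	# Report the Ready condition of one of the pods being polled above.
	# <profile> is a placeholder for the test profile's kubeconfig context.
	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-gg8bl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'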
	I1028 18:30:37.947847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:37.963147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:37.963210   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.001768   67149 cri.go:89] found id: ""
	I1028 18:30:38.001792   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.001802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:38.001809   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:38.001868   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:38.042877   67149 cri.go:89] found id: ""
	I1028 18:30:38.042905   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.042916   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:38.042924   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:38.042986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:38.078116   67149 cri.go:89] found id: ""
	I1028 18:30:38.078143   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.078154   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:38.078162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:38.078226   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:38.111082   67149 cri.go:89] found id: ""
	I1028 18:30:38.111108   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.111119   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:38.111127   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:38.111187   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:38.144863   67149 cri.go:89] found id: ""
	I1028 18:30:38.144889   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.144898   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:38.144906   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:38.144962   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:38.178671   67149 cri.go:89] found id: ""
	I1028 18:30:38.178701   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.178712   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:38.178719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:38.178774   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:38.218441   67149 cri.go:89] found id: ""
	I1028 18:30:38.218464   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.218472   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:38.218477   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:38.218528   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:38.252697   67149 cri.go:89] found id: ""
	I1028 18:30:38.252719   67149 logs.go:282] 0 containers: []
	W1028 18:30:38.252727   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:38.252736   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:38.252745   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:38.304813   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:38.304853   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:38.318437   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:38.318462   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:38.389959   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:38.389987   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:38.390002   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:38.471462   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:38.471495   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:41.013647   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:41.027167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:41.027233   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:38.113426   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.612536   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:39.903261   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.402381   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:40.871193   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:42.873502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:41.062559   67149 cri.go:89] found id: ""
	I1028 18:30:41.062590   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.062601   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:41.062609   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:41.062667   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:41.097732   67149 cri.go:89] found id: ""
	I1028 18:30:41.097758   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.097767   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:41.097773   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:41.097819   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:41.133067   67149 cri.go:89] found id: ""
	I1028 18:30:41.133089   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.133097   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:41.133102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:41.133150   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:41.168640   67149 cri.go:89] found id: ""
	I1028 18:30:41.168674   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.168684   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:41.168691   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:41.168754   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:41.206429   67149 cri.go:89] found id: ""
	I1028 18:30:41.206453   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.206463   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:41.206470   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:41.206527   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:41.248326   67149 cri.go:89] found id: ""
	I1028 18:30:41.248350   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.248360   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:41.248369   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:41.248429   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:41.283703   67149 cri.go:89] found id: ""
	I1028 18:30:41.283734   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.283746   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:41.283753   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:41.283810   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:41.327759   67149 cri.go:89] found id: ""
	I1028 18:30:41.327786   67149 logs.go:282] 0 containers: []
	W1028 18:30:41.327796   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:41.327807   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:41.327820   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:41.388563   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:41.388593   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:41.406411   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:41.406435   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:41.490605   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:41.490626   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:41.490637   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:41.569386   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:41.569433   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.109394   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:44.123047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:44.123113   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:44.156762   67149 cri.go:89] found id: ""
	I1028 18:30:44.156792   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.156802   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:44.156810   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:44.156867   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:44.192244   67149 cri.go:89] found id: ""
	I1028 18:30:44.192271   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.192282   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:44.192289   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:44.192357   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:44.224059   67149 cri.go:89] found id: ""
	I1028 18:30:44.224094   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.224101   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:44.224115   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:44.224168   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:44.258750   67149 cri.go:89] found id: ""
	I1028 18:30:44.258779   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.258789   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:44.258797   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:44.258854   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:44.295600   67149 cri.go:89] found id: ""
	I1028 18:30:44.295624   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.295632   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:44.295638   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:44.295684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:44.327278   67149 cri.go:89] found id: ""
	I1028 18:30:44.327302   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.327309   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:44.327315   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:44.327370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:44.360734   67149 cri.go:89] found id: ""
	I1028 18:30:44.360760   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.360768   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:44.360774   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:44.360822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:44.398198   67149 cri.go:89] found id: ""
	I1028 18:30:44.398224   67149 logs.go:282] 0 containers: []
	W1028 18:30:44.398234   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:44.398249   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:44.398261   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:44.476135   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:44.476167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:44.514073   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:44.514105   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:44.563001   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:44.563033   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:44.576882   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:44.576912   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:44.648532   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
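Every describe-nodes attempt fails the same way: kubectl inside the guest cannot reach an API server on localhost:8443, which is consistent with the collector finding no kube-apiserver container at all. A hedged way to confirm that state from inside the node is sketched below; these checks are an assumption of this commentary, not commands minikube itself runs.

	# Nothing should be listening on the API server port if the pod never started.
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	# The same CRI query the collector uses, without --quiet, to show exited containers too.
	sudo crictl ps -a --name kube-apiserver
	# kubelet and cri-o should still be active even while the control plane is down.
	systemctl is-active kubelet crio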
	I1028 18:30:43.112043   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.113135   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.113382   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:44.403147   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:46.902890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:45.370854   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.371758   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.373946   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:47.149133   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:47.165612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:47.165696   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:47.203960   67149 cri.go:89] found id: ""
	I1028 18:30:47.203987   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.203996   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:47.204002   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:47.204065   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:47.236731   67149 cri.go:89] found id: ""
	I1028 18:30:47.236757   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.236766   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:47.236774   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:47.236828   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:47.273779   67149 cri.go:89] found id: ""
	I1028 18:30:47.273808   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.273820   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:47.273826   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:47.273878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:47.309996   67149 cri.go:89] found id: ""
	I1028 18:30:47.310020   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.310028   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:47.310034   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:47.310108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:47.352904   67149 cri.go:89] found id: ""
	I1028 18:30:47.352925   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.352934   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:47.352939   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:47.352990   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:47.389641   67149 cri.go:89] found id: ""
	I1028 18:30:47.389660   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.389667   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:47.389672   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:47.389718   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:47.422591   67149 cri.go:89] found id: ""
	I1028 18:30:47.422622   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.422632   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:47.422639   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:47.422694   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:47.454849   67149 cri.go:89] found id: ""
	I1028 18:30:47.454876   67149 logs.go:282] 0 containers: []
	W1028 18:30:47.454886   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:47.454895   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:47.454916   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:47.506176   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:47.506203   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:47.519084   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:47.519108   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:47.585660   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:47.585681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:47.585696   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:47.664904   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:47.664939   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:50.203775   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:50.216923   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:50.216992   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:50.252506   67149 cri.go:89] found id: ""
	I1028 18:30:50.252531   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.252541   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:50.252548   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:50.252608   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:50.288641   67149 cri.go:89] found id: ""
	I1028 18:30:50.288669   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.288678   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:50.288684   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:50.288739   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:50.322130   67149 cri.go:89] found id: ""
	I1028 18:30:50.322163   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.322174   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:50.322182   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:50.322240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:50.359508   67149 cri.go:89] found id: ""
	I1028 18:30:50.359536   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.359546   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:50.359554   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:50.359617   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:50.393571   67149 cri.go:89] found id: ""
	I1028 18:30:50.393607   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.393618   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:50.393626   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:50.393685   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:50.428683   67149 cri.go:89] found id: ""
	I1028 18:30:50.428705   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.428713   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:50.428719   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:50.428767   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:50.464086   67149 cri.go:89] found id: ""
	I1028 18:30:50.464111   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.464119   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:50.464125   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:50.464183   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:50.496695   67149 cri.go:89] found id: ""
	I1028 18:30:50.496726   67149 logs.go:282] 0 containers: []
	W1028 18:30:50.496736   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:50.496745   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:50.496755   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:50.545495   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:50.545526   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:50.558819   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:50.558852   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:50.636344   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:50.636369   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:50.636384   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:50.720270   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:50.720304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:49.612927   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.613353   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:49.402779   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.901517   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:51.873490   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:54.372373   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.261194   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:53.274451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:53.274507   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:53.306258   67149 cri.go:89] found id: ""
	I1028 18:30:53.306286   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.306295   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:53.306301   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:53.306362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:53.340222   67149 cri.go:89] found id: ""
	I1028 18:30:53.340244   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.340253   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:53.340258   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:53.340322   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:53.377726   67149 cri.go:89] found id: ""
	I1028 18:30:53.377750   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.377760   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:53.377767   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:53.377820   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:53.414228   67149 cri.go:89] found id: ""
	I1028 18:30:53.414252   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.414262   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:53.414275   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:53.414332   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:53.449152   67149 cri.go:89] found id: ""
	I1028 18:30:53.449179   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.449186   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:53.449192   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:53.449237   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:53.485678   67149 cri.go:89] found id: ""
	I1028 18:30:53.485705   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.485716   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:53.485723   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:53.485784   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:53.520764   67149 cri.go:89] found id: ""
	I1028 18:30:53.520791   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.520802   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:53.520810   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:53.520870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:53.561153   67149 cri.go:89] found id: ""
	I1028 18:30:53.561176   67149 logs.go:282] 0 containers: []
	W1028 18:30:53.561184   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:53.561192   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:53.561202   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:53.642192   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:53.642242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:53.686527   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:53.686567   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:53.740815   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:53.740849   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:53.754577   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:53.754604   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:53.823717   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:54.112985   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.612820   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:53.903128   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:55.903482   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.372798   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.871814   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:56.324847   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:56.338572   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:56.338628   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:56.375482   67149 cri.go:89] found id: ""
	I1028 18:30:56.375506   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.375517   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:56.375524   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:56.375580   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:56.407894   67149 cri.go:89] found id: ""
	I1028 18:30:56.407921   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.407931   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:56.407938   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:56.407993   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:56.447006   67149 cri.go:89] found id: ""
	I1028 18:30:56.447037   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.447048   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:56.447055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:56.447112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:56.483850   67149 cri.go:89] found id: ""
	I1028 18:30:56.483880   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.483890   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:56.483898   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:56.483958   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:56.520008   67149 cri.go:89] found id: ""
	I1028 18:30:56.520038   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.520045   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:56.520051   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:56.520099   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:56.552567   67149 cri.go:89] found id: ""
	I1028 18:30:56.552592   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.552600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:56.552608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:56.552658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:56.591277   67149 cri.go:89] found id: ""
	I1028 18:30:56.591297   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.591305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:56.591311   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:56.591362   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:56.632164   67149 cri.go:89] found id: ""
	I1028 18:30:56.632186   67149 logs.go:282] 0 containers: []
	W1028 18:30:56.632194   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:56.632202   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:56.632219   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:56.683590   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:56.683623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:56.698509   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:56.698539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:56.777141   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:56.777171   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:56.777188   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:56.851801   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:56.851842   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.394266   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:30:59.408460   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:30:59.408545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:30:59.444066   67149 cri.go:89] found id: ""
	I1028 18:30:59.444092   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.444104   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:30:59.444112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:30:59.444165   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:30:59.479531   67149 cri.go:89] found id: ""
	I1028 18:30:59.479557   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.479568   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:30:59.479576   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:30:59.479622   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:30:59.519467   67149 cri.go:89] found id: ""
	I1028 18:30:59.519489   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.519496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:30:59.519502   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:30:59.519546   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:30:59.551108   67149 cri.go:89] found id: ""
	I1028 18:30:59.551131   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.551140   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:30:59.551146   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:30:59.551197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:30:59.585875   67149 cri.go:89] found id: ""
	I1028 18:30:59.585899   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.585907   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:30:59.585912   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:30:59.585968   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:30:59.620571   67149 cri.go:89] found id: ""
	I1028 18:30:59.620595   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.620602   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:30:59.620608   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:30:59.620655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:30:59.653927   67149 cri.go:89] found id: ""
	I1028 18:30:59.653954   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.653965   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:30:59.653972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:30:59.654039   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:30:59.689138   67149 cri.go:89] found id: ""
	I1028 18:30:59.689160   67149 logs.go:282] 0 containers: []
	W1028 18:30:59.689168   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:30:59.689175   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:30:59.689185   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:30:59.768231   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:30:59.768270   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:30:59.811980   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:30:59.812007   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:30:59.864509   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:30:59.864543   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:30:59.879329   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:30:59.879354   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:30:59.950134   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:30:59.112280   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:01.113341   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:30:58.402845   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.902628   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.904642   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:00.872873   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:03.371672   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:02.450237   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:02.464689   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:02.464765   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:02.500938   67149 cri.go:89] found id: ""
	I1028 18:31:02.500964   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.500975   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:02.500982   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:02.501043   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:02.534580   67149 cri.go:89] found id: ""
	I1028 18:31:02.534608   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.534620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:02.534628   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:02.534684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:02.570203   67149 cri.go:89] found id: ""
	I1028 18:31:02.570224   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.570231   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:02.570237   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:02.570284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:02.606037   67149 cri.go:89] found id: ""
	I1028 18:31:02.606064   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.606072   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:02.606082   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:02.606135   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:02.640622   67149 cri.go:89] found id: ""
	I1028 18:31:02.640646   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.640656   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:02.640663   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:02.640723   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:02.676406   67149 cri.go:89] found id: ""
	I1028 18:31:02.676434   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.676444   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:02.676451   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:02.676520   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:02.710284   67149 cri.go:89] found id: ""
	I1028 18:31:02.710308   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.710316   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:02.710322   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:02.710376   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:02.750853   67149 cri.go:89] found id: ""
	I1028 18:31:02.750899   67149 logs.go:282] 0 containers: []
	W1028 18:31:02.750910   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:02.750918   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:02.750929   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:02.825886   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:02.825913   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:02.825927   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:02.904828   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:02.904857   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:02.941886   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:02.941922   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:02.991603   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:02.991632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.505655   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:05.520582   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:05.520638   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:05.558724   67149 cri.go:89] found id: ""
	I1028 18:31:05.558753   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.558763   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:05.558770   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:05.558816   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:05.597864   67149 cri.go:89] found id: ""
	I1028 18:31:05.597885   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.597893   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:05.597898   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:05.597956   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:05.643571   67149 cri.go:89] found id: ""
	I1028 18:31:05.643602   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.643613   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:05.643620   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:05.643679   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:05.682010   67149 cri.go:89] found id: ""
	I1028 18:31:05.682039   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.682048   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:05.682053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:05.682106   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:05.716043   67149 cri.go:89] found id: ""
	I1028 18:31:05.716067   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.716080   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:05.716086   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:05.716134   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:05.750962   67149 cri.go:89] found id: ""
	I1028 18:31:05.750995   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.751010   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:05.751016   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:05.751078   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:05.785059   67149 cri.go:89] found id: ""
	I1028 18:31:05.785111   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.785124   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:05.785132   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:05.785193   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:05.833525   67149 cri.go:89] found id: ""
	I1028 18:31:05.833550   67149 logs.go:282] 0 containers: []
	W1028 18:31:05.833559   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:05.833566   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:05.833579   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:05.887766   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:05.887796   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:05.902575   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:05.902606   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:05.975082   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:05.975108   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:05.975122   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:03.613265   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.114362   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.402167   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:07.402252   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:05.873147   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:08.370748   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:06.050369   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:06.050396   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.593506   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:08.606188   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:08.606251   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:08.645186   67149 cri.go:89] found id: ""
	I1028 18:31:08.645217   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.645227   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:08.645235   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:08.645294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:08.680728   67149 cri.go:89] found id: ""
	I1028 18:31:08.680759   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.680771   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:08.680778   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:08.680833   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:08.714733   67149 cri.go:89] found id: ""
	I1028 18:31:08.714760   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.714772   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:08.714779   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:08.714844   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:08.750293   67149 cri.go:89] found id: ""
	I1028 18:31:08.750323   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.750333   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:08.750339   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:08.750390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:08.784521   67149 cri.go:89] found id: ""
	I1028 18:31:08.784548   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.784559   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:08.784566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:08.784629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:08.818808   67149 cri.go:89] found id: ""
	I1028 18:31:08.818838   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.818849   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:08.818857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:08.818920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:08.855575   67149 cri.go:89] found id: ""
	I1028 18:31:08.855608   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.855619   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:08.855633   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:08.855690   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:08.892996   67149 cri.go:89] found id: ""
	I1028 18:31:08.893024   67149 logs.go:282] 0 containers: []
	W1028 18:31:08.893035   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:08.893045   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:08.893064   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:08.937056   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:08.937084   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:08.989013   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:08.989048   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:09.002048   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:09.002077   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:09.075247   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:09.075277   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:09.075290   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:08.612396   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.612689   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:09.402595   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.903403   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:10.371335   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:12.371435   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.371502   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:11.654701   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:11.668066   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:11.668146   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:11.701666   67149 cri.go:89] found id: ""
	I1028 18:31:11.701693   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.701703   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:11.701710   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:11.701769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:11.738342   67149 cri.go:89] found id: ""
	I1028 18:31:11.738368   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.738376   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:11.738381   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:11.738428   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:11.772009   67149 cri.go:89] found id: ""
	I1028 18:31:11.772035   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.772045   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:11.772053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:11.772118   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:11.816210   67149 cri.go:89] found id: ""
	I1028 18:31:11.816237   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.816245   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:11.816251   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:11.816314   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:11.856675   67149 cri.go:89] found id: ""
	I1028 18:31:11.856704   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.856714   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:11.856722   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:11.856785   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:11.896566   67149 cri.go:89] found id: ""
	I1028 18:31:11.896592   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.896600   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:11.896606   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:11.896665   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:11.932599   67149 cri.go:89] found id: ""
	I1028 18:31:11.932624   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.932633   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:11.932640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:11.932704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:11.966952   67149 cri.go:89] found id: ""
	I1028 18:31:11.966982   67149 logs.go:282] 0 containers: []
	W1028 18:31:11.967008   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:11.967019   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:11.967037   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:12.016465   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:12.016502   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:12.029314   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:12.029343   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:12.098906   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:12.098936   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:12.098954   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:12.176440   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:12.176489   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:14.720173   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:14.733796   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:14.733848   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:14.774072   67149 cri.go:89] found id: ""
	I1028 18:31:14.774093   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.774100   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:14.774106   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:14.774152   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:14.816116   67149 cri.go:89] found id: ""
	I1028 18:31:14.816145   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.816158   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:14.816166   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:14.816224   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:14.851167   67149 cri.go:89] found id: ""
	I1028 18:31:14.851189   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.851196   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:14.851202   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:14.851247   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:14.885887   67149 cri.go:89] found id: ""
	I1028 18:31:14.885918   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.885926   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:14.885931   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:14.885976   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:14.923787   67149 cri.go:89] found id: ""
	I1028 18:31:14.923815   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.923826   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:14.923833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:14.923892   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:14.960117   67149 cri.go:89] found id: ""
	I1028 18:31:14.960148   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.960160   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:14.960167   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:14.960240   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:14.998418   67149 cri.go:89] found id: ""
	I1028 18:31:14.998458   67149 logs.go:282] 0 containers: []
	W1028 18:31:14.998470   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:14.998485   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:14.998545   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:15.031985   67149 cri.go:89] found id: ""
	I1028 18:31:15.032005   67149 logs.go:282] 0 containers: []
	W1028 18:31:15.032014   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:15.032027   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:15.032038   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:15.045239   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:15.045264   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:15.118954   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:15.118978   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:15.118994   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:15.200538   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:15.200569   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:15.243581   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:15.243603   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:13.112232   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:15.113498   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.612946   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:14.401769   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.402729   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:16.871916   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.872378   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:17.794670   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:17.808325   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:17.808380   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:17.841888   67149 cri.go:89] found id: ""
	I1028 18:31:17.841911   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.841919   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:17.841925   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:17.841979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:17.881241   67149 cri.go:89] found id: ""
	I1028 18:31:17.881261   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.881269   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:17.881274   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:17.881331   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:17.922394   67149 cri.go:89] found id: ""
	I1028 18:31:17.922419   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.922428   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:17.922434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:17.922498   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:17.963519   67149 cri.go:89] found id: ""
	I1028 18:31:17.963546   67149 logs.go:282] 0 containers: []
	W1028 18:31:17.963558   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:17.963566   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:17.963641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:18.003181   67149 cri.go:89] found id: ""
	I1028 18:31:18.003202   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.003209   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:18.003214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:18.003261   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:18.040305   67149 cri.go:89] found id: ""
	I1028 18:31:18.040338   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.040348   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:18.040356   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:18.040413   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:18.077671   67149 cri.go:89] found id: ""
	I1028 18:31:18.077696   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.077708   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:18.077715   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:18.077777   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:18.116155   67149 cri.go:89] found id: ""
	I1028 18:31:18.116176   67149 logs.go:282] 0 containers: []
	W1028 18:31:18.116182   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:18.116190   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:18.116201   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:18.168343   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:18.168370   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:18.181962   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:18.181988   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:18.260227   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:18.260251   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:18.260265   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:18.346588   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:18.346620   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:20.885832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:20.899053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:20.899121   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:20.954770   67149 cri.go:89] found id: ""
	I1028 18:31:20.954797   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.954806   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:20.954812   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:20.954870   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:20.989809   67149 cri.go:89] found id: ""
	I1028 18:31:20.989834   67149 logs.go:282] 0 containers: []
	W1028 18:31:20.989842   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:20.989848   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:20.989900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:21.027150   67149 cri.go:89] found id: ""
	I1028 18:31:21.027179   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.027191   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:21.027199   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:21.027259   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:20.113283   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:22.612710   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:18.902738   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.403607   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.371574   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.871000   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:21.061235   67149 cri.go:89] found id: ""
	I1028 18:31:21.061260   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.061270   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:21.061277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:21.061337   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:21.095451   67149 cri.go:89] found id: ""
	I1028 18:31:21.095473   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.095481   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:21.095487   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:21.095540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:21.135576   67149 cri.go:89] found id: ""
	I1028 18:31:21.135598   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.135606   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:21.135612   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:21.135660   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:21.170816   67149 cri.go:89] found id: ""
	I1028 18:31:21.170845   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.170854   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:21.170860   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:21.170920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:21.204616   67149 cri.go:89] found id: ""
	I1028 18:31:21.204649   67149 logs.go:282] 0 containers: []
	W1028 18:31:21.204660   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:21.204672   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:21.204686   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:21.254523   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:21.254556   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:21.267981   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:21.268005   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:21.336786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:21.336813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:21.336828   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:21.420596   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:21.420625   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:23.962346   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:23.976628   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:23.976697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:24.016418   67149 cri.go:89] found id: ""
	I1028 18:31:24.016444   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.016453   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:24.016458   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:24.016533   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:24.051448   67149 cri.go:89] found id: ""
	I1028 18:31:24.051474   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.051483   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:24.051488   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:24.051554   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:24.090787   67149 cri.go:89] found id: ""
	I1028 18:31:24.090816   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.090829   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:24.090836   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:24.090900   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:24.126315   67149 cri.go:89] found id: ""
	I1028 18:31:24.126342   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.126349   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:24.126355   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:24.126402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:24.161340   67149 cri.go:89] found id: ""
	I1028 18:31:24.161367   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.161379   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:24.161387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:24.161445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:24.195991   67149 cri.go:89] found id: ""
	I1028 18:31:24.196017   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.196028   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:24.196036   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:24.196084   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:24.229789   67149 cri.go:89] found id: ""
	I1028 18:31:24.229822   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.229837   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:24.229845   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:24.229906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:24.264724   67149 cri.go:89] found id: ""
	I1028 18:31:24.264748   67149 logs.go:282] 0 containers: []
	W1028 18:31:24.264757   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:24.264765   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:24.264775   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:24.303551   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:24.303574   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:24.351693   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:24.351725   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:24.364537   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:24.364566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:24.436935   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:24.436955   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:24.436966   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:25.112870   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.612492   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:23.902008   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.902544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.902622   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:25.871089   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.871265   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:29.872201   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:27.014928   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:27.029540   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:27.029609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:27.064598   67149 cri.go:89] found id: ""
	I1028 18:31:27.064626   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.064636   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:27.064643   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:27.064704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:27.099432   67149 cri.go:89] found id: ""
	I1028 18:31:27.099455   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.099465   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:27.099472   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:27.099531   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:27.133961   67149 cri.go:89] found id: ""
	I1028 18:31:27.133996   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.134006   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:27.134012   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:27.134075   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:27.171976   67149 cri.go:89] found id: ""
	I1028 18:31:27.172003   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.172014   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:27.172022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:27.172092   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:27.205681   67149 cri.go:89] found id: ""
	I1028 18:31:27.205710   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.205721   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:27.205730   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:27.205793   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:27.244571   67149 cri.go:89] found id: ""
	I1028 18:31:27.244603   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.244612   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:27.244617   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:27.244661   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:27.281692   67149 cri.go:89] found id: ""
	I1028 18:31:27.281722   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.281738   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:27.281746   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:27.281800   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:27.335003   67149 cri.go:89] found id: ""
	I1028 18:31:27.335033   67149 logs.go:282] 0 containers: []
	W1028 18:31:27.335041   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:27.335049   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:27.335066   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:27.353992   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:27.354017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:27.457103   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:27.457125   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:27.457136   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:27.537717   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:27.537746   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:27.579842   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:27.579870   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.133749   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:30.147518   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:30.147576   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:30.182683   67149 cri.go:89] found id: ""
	I1028 18:31:30.182711   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.182722   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:30.182729   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:30.182792   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:30.215088   67149 cri.go:89] found id: ""
	I1028 18:31:30.215109   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.215118   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:30.215124   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:30.215176   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:30.250169   67149 cri.go:89] found id: ""
	I1028 18:31:30.250194   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.250202   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:30.250207   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:30.250284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:30.286028   67149 cri.go:89] found id: ""
	I1028 18:31:30.286055   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.286062   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:30.286069   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:30.286112   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:30.320503   67149 cri.go:89] found id: ""
	I1028 18:31:30.320528   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.320539   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:30.320547   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:30.320604   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:30.352773   67149 cri.go:89] found id: ""
	I1028 18:31:30.352793   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.352800   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:30.352806   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:30.352859   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:30.385922   67149 cri.go:89] found id: ""
	I1028 18:31:30.385944   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.385951   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:30.385956   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:30.385999   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:30.421909   67149 cri.go:89] found id: ""
	I1028 18:31:30.421933   67149 logs.go:282] 0 containers: []
	W1028 18:31:30.421945   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:30.421956   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:30.421971   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:30.470917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:30.470944   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:30.484033   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:30.484059   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:30.554810   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:30.554836   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:30.554850   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:30.634403   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:30.634432   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:30.113496   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.613397   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:30.402688   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:32.902277   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:31.872598   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:34.371198   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:33.182127   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:33.194994   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:33.195063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:33.233076   67149 cri.go:89] found id: ""
	I1028 18:31:33.233098   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.233106   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:33.233112   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:33.233160   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:33.266963   67149 cri.go:89] found id: ""
	I1028 18:31:33.266998   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.267021   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:33.267028   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:33.267083   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:33.305888   67149 cri.go:89] found id: ""
	I1028 18:31:33.305914   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.305922   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:33.305928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:33.305979   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:33.339451   67149 cri.go:89] found id: ""
	I1028 18:31:33.339479   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.339489   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:33.339496   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:33.339555   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:33.375038   67149 cri.go:89] found id: ""
	I1028 18:31:33.375065   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.375073   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:33.375079   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:33.375125   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:33.409157   67149 cri.go:89] found id: ""
	I1028 18:31:33.409176   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.409183   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:33.409189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:33.409243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:33.449108   67149 cri.go:89] found id: ""
	I1028 18:31:33.449133   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.449149   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:33.449155   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:33.449227   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:33.491194   67149 cri.go:89] found id: ""
	I1028 18:31:33.491215   67149 logs.go:282] 0 containers: []
	W1028 18:31:33.491224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:33.491232   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:33.491250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:33.530590   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:33.530618   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:33.581933   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:33.581962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:33.595387   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:33.595416   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:33.664855   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:33.664882   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:33.664899   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:35.113673   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.612606   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:35.401938   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:37.402270   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.372499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:38.372670   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:36.242724   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:36.256152   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:36.256221   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:36.292452   67149 cri.go:89] found id: ""
	I1028 18:31:36.292494   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.292504   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:36.292511   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:36.292568   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:36.325210   67149 cri.go:89] found id: ""
	I1028 18:31:36.325231   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.325238   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:36.325244   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:36.325293   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:36.356738   67149 cri.go:89] found id: ""
	I1028 18:31:36.356757   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.356764   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:36.356769   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:36.356827   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:36.389678   67149 cri.go:89] found id: ""
	I1028 18:31:36.389704   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.389712   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:36.389717   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:36.389775   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:36.422956   67149 cri.go:89] found id: ""
	I1028 18:31:36.422989   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.422998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:36.423005   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:36.423061   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:36.456877   67149 cri.go:89] found id: ""
	I1028 18:31:36.456904   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.456914   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:36.456921   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:36.456983   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:36.489728   67149 cri.go:89] found id: ""
	I1028 18:31:36.489758   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.489766   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:36.489772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:36.489829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:36.524307   67149 cri.go:89] found id: ""
	I1028 18:31:36.524338   67149 logs.go:282] 0 containers: []
	W1028 18:31:36.524350   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:36.524360   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:36.524372   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:36.574771   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:36.574800   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:36.587485   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:36.587506   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:36.655922   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:36.655949   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:36.655962   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:36.738312   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:36.738352   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.279425   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:39.293108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:39.293167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:39.325542   67149 cri.go:89] found id: ""
	I1028 18:31:39.325573   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.325584   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:39.325592   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:39.325656   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:39.357581   67149 cri.go:89] found id: ""
	I1028 18:31:39.357609   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.357620   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:39.357627   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:39.357681   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:39.394833   67149 cri.go:89] found id: ""
	I1028 18:31:39.394853   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.394860   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:39.394866   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:39.394916   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:39.430151   67149 cri.go:89] found id: ""
	I1028 18:31:39.430178   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.430188   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:39.430196   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:39.430254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:39.468060   67149 cri.go:89] found id: ""
	I1028 18:31:39.468089   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.468100   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:39.468108   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:39.468181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:39.503702   67149 cri.go:89] found id: ""
	I1028 18:31:39.503734   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.503752   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:39.503761   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:39.503829   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:39.536193   67149 cri.go:89] found id: ""
	I1028 18:31:39.536221   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.536233   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:39.536240   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:39.536305   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:39.570194   67149 cri.go:89] found id: ""
	I1028 18:31:39.570215   67149 logs.go:282] 0 containers: []
	W1028 18:31:39.570224   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:39.570232   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:39.570245   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:39.647179   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:39.647207   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:39.647220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:39.725980   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:39.726012   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:39.765671   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:39.765704   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:39.818257   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:39.818289   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:39.614055   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.112561   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:39.902061   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.402314   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:40.871483   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.872270   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:42.332335   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:42.344964   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:42.345031   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:42.380904   67149 cri.go:89] found id: ""
	I1028 18:31:42.380926   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.380933   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:42.380938   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:42.380982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:42.414361   67149 cri.go:89] found id: ""
	I1028 18:31:42.414385   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.414393   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:42.414399   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:42.414443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:42.447931   67149 cri.go:89] found id: ""
	I1028 18:31:42.447957   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.447968   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:42.447975   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:42.448024   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:42.483262   67149 cri.go:89] found id: ""
	I1028 18:31:42.483283   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.483296   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:42.483301   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:42.483365   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:42.516665   67149 cri.go:89] found id: ""
	I1028 18:31:42.516693   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.516702   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:42.516709   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:42.516776   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:42.550160   67149 cri.go:89] found id: ""
	I1028 18:31:42.550181   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.550188   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:42.550193   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:42.550238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:42.583509   67149 cri.go:89] found id: ""
	I1028 18:31:42.583535   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.583546   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:42.583552   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:42.583611   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:42.619276   67149 cri.go:89] found id: ""
	I1028 18:31:42.619312   67149 logs.go:282] 0 containers: []
	W1028 18:31:42.619320   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:42.619328   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:42.619338   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:42.692442   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:42.692487   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:42.731768   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:42.731798   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:42.783997   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:42.784043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:42.797809   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:42.797834   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:42.863351   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.363648   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:45.376277   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:45.376341   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:45.415231   67149 cri.go:89] found id: ""
	I1028 18:31:45.415255   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.415265   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:45.415273   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:45.415330   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:45.451133   67149 cri.go:89] found id: ""
	I1028 18:31:45.451157   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.451164   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:45.451170   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:45.451228   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:45.483526   67149 cri.go:89] found id: ""
	I1028 18:31:45.483552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.483562   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:45.483567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:45.483621   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:45.515799   67149 cri.go:89] found id: ""
	I1028 18:31:45.515828   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.515838   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:45.515846   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:45.515906   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:45.548043   67149 cri.go:89] found id: ""
	I1028 18:31:45.548071   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.548082   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:45.548090   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:45.548153   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:45.581525   67149 cri.go:89] found id: ""
	I1028 18:31:45.581552   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.581563   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:45.581570   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:45.581629   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:45.622258   67149 cri.go:89] found id: ""
	I1028 18:31:45.622282   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.622290   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:45.622296   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:45.622353   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:45.661255   67149 cri.go:89] found id: ""
	I1028 18:31:45.661275   67149 logs.go:282] 0 containers: []
	W1028 18:31:45.661284   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:45.661292   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:45.661304   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:45.675209   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:45.675242   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:45.737546   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:45.737573   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:45.737592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:45.816012   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:45.816043   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:45.854135   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:45.854167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:44.612155   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.612875   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:44.402557   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:46.902339   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:45.371918   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:47.872710   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.875644   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:48.406233   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:48.418950   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:48.419001   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:48.452933   67149 cri.go:89] found id: ""
	I1028 18:31:48.452952   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.452961   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:48.452975   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:48.453034   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:48.489604   67149 cri.go:89] found id: ""
	I1028 18:31:48.489630   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.489640   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:48.489648   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:48.489706   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:48.525463   67149 cri.go:89] found id: ""
	I1028 18:31:48.525493   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.525504   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:48.525511   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:48.525566   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:48.559266   67149 cri.go:89] found id: ""
	I1028 18:31:48.559294   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.559302   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:48.559308   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:48.559363   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:48.592670   67149 cri.go:89] found id: ""
	I1028 18:31:48.592695   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.592706   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:48.592714   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:48.592769   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:48.627175   67149 cri.go:89] found id: ""
	I1028 18:31:48.627196   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.627205   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:48.627213   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:48.627260   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:48.661864   67149 cri.go:89] found id: ""
	I1028 18:31:48.661887   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.661895   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:48.661901   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:48.661946   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:48.696731   67149 cri.go:89] found id: ""
	I1028 18:31:48.696756   67149 logs.go:282] 0 containers: []
	W1028 18:31:48.696765   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:48.696775   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:48.696788   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:48.745390   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:48.745417   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:48.759218   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:48.759241   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:48.830299   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:48.830331   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:48.830349   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:48.909934   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:48.909963   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:49.112884   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.613217   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:49.402707   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.903103   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:52.373283   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.872603   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:51.451597   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:51.464889   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:51.464943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:51.499962   67149 cri.go:89] found id: ""
	I1028 18:31:51.499990   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.500001   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:51.500010   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:51.500069   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:51.532341   67149 cri.go:89] found id: ""
	I1028 18:31:51.532370   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.532380   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:51.532388   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:51.532443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:51.565531   67149 cri.go:89] found id: ""
	I1028 18:31:51.565554   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.565561   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:51.565567   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:51.565614   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:51.602859   67149 cri.go:89] found id: ""
	I1028 18:31:51.602882   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.602894   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:51.602899   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:51.602943   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:51.639896   67149 cri.go:89] found id: ""
	I1028 18:31:51.639915   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.639922   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:51.639928   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:51.639972   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:51.675728   67149 cri.go:89] found id: ""
	I1028 18:31:51.675755   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.675762   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:51.675768   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:51.675825   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:51.710285   67149 cri.go:89] found id: ""
	I1028 18:31:51.710312   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.710320   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:51.710326   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:51.710374   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:51.744527   67149 cri.go:89] found id: ""
	I1028 18:31:51.744551   67149 logs.go:282] 0 containers: []
	W1028 18:31:51.744560   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:51.744570   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:51.744584   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:51.780580   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:51.780614   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:51.832979   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:51.833008   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:51.846389   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:51.846415   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:51.918177   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:51.918196   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:51.918210   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.493806   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:54.506468   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:54.506526   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:54.540500   67149 cri.go:89] found id: ""
	I1028 18:31:54.540527   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.540537   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:54.540544   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:54.540601   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:54.573399   67149 cri.go:89] found id: ""
	I1028 18:31:54.573428   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.573438   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:54.573448   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:54.573509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:54.606227   67149 cri.go:89] found id: ""
	I1028 18:31:54.606262   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.606272   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:54.606278   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:54.606338   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:54.641143   67149 cri.go:89] found id: ""
	I1028 18:31:54.641163   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.641172   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:54.641179   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:54.641238   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:54.674269   67149 cri.go:89] found id: ""
	I1028 18:31:54.674292   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.674300   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:54.674306   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:54.674352   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:54.707160   67149 cri.go:89] found id: ""
	I1028 18:31:54.707183   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.707191   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:54.707197   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:54.707242   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:54.746522   67149 cri.go:89] found id: ""
	I1028 18:31:54.746544   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.746552   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:54.746558   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:54.746613   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:54.779315   67149 cri.go:89] found id: ""
	I1028 18:31:54.779341   67149 logs.go:282] 0 containers: []
	W1028 18:31:54.779348   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:54.779356   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:54.779367   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:54.830987   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:54.831017   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:31:54.844846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:54.844871   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:54.913540   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:54.913558   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:54.913568   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:54.994220   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:54.994250   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:54.112785   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.114029   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:54.401657   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:56.402726   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.371756   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:59.372308   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:57.532820   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:31:57.545394   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:31:57.545454   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:31:57.582329   67149 cri.go:89] found id: ""
	I1028 18:31:57.582355   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.582365   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:31:57.582372   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:31:57.582438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:31:57.616082   67149 cri.go:89] found id: ""
	I1028 18:31:57.616107   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.616115   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:31:57.616123   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:31:57.616167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:31:57.650118   67149 cri.go:89] found id: ""
	I1028 18:31:57.650144   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.650153   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:31:57.650162   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:31:57.650215   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:31:57.684801   67149 cri.go:89] found id: ""
	I1028 18:31:57.684823   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.684831   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:31:57.684839   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:31:57.684887   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:31:57.722396   67149 cri.go:89] found id: ""
	I1028 18:31:57.722423   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.722431   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:31:57.722437   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:31:57.722516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:31:57.759779   67149 cri.go:89] found id: ""
	I1028 18:31:57.759802   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.759809   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:31:57.759818   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:31:57.759861   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:31:57.793977   67149 cri.go:89] found id: ""
	I1028 18:31:57.794034   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.794045   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:31:57.794053   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:31:57.794117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:31:57.831104   67149 cri.go:89] found id: ""
	I1028 18:31:57.831130   67149 logs.go:282] 0 containers: []
	W1028 18:31:57.831140   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:31:57.831151   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:31:57.831164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:31:57.920155   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:31:57.920174   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:31:57.920184   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:57.999677   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:31:57.999709   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:31:58.036647   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:31:58.036673   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:31:58.088299   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:31:58.088333   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.601832   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:00.615434   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:00.615491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:00.653344   67149 cri.go:89] found id: ""
	I1028 18:32:00.653372   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.653383   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:00.653390   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:00.653450   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:00.693086   67149 cri.go:89] found id: ""
	I1028 18:32:00.693111   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.693122   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:00.693130   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:00.693188   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:00.728129   67149 cri.go:89] found id: ""
	I1028 18:32:00.728157   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.728167   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:00.728181   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:00.728243   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:00.760540   67149 cri.go:89] found id: ""
	I1028 18:32:00.760568   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.760579   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:00.760586   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:00.760654   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:00.796633   67149 cri.go:89] found id: ""
	I1028 18:32:00.796662   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.796672   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:00.796680   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:00.796740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:00.829924   67149 cri.go:89] found id: ""
	I1028 18:32:00.829954   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.829966   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:00.829974   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:00.830028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:00.861565   67149 cri.go:89] found id: ""
	I1028 18:32:00.861586   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.861593   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:00.861599   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:00.861655   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:00.894129   67149 cri.go:89] found id: ""
	I1028 18:32:00.894154   67149 logs.go:282] 0 containers: []
	W1028 18:32:00.894162   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:00.894169   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:00.894180   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:00.908303   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:00.908331   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:00.974521   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:00.974543   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:00.974557   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:31:58.612554   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.612655   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:31:58.901908   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:00.902851   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.872423   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.873235   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:01.048113   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:01.048140   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:01.086657   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:01.086731   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.639781   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:03.652239   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:03.652291   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:03.687098   67149 cri.go:89] found id: ""
	I1028 18:32:03.687120   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.687129   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:03.687135   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:03.687181   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:03.722176   67149 cri.go:89] found id: ""
	I1028 18:32:03.722206   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.722217   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:03.722225   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:03.722282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:03.757489   67149 cri.go:89] found id: ""
	I1028 18:32:03.757512   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.757520   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:03.757526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:03.757571   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:03.795359   67149 cri.go:89] found id: ""
	I1028 18:32:03.795400   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.795411   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:03.795429   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:03.795489   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:03.830919   67149 cri.go:89] found id: ""
	I1028 18:32:03.830945   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.830953   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:03.830958   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:03.831008   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:03.863396   67149 cri.go:89] found id: ""
	I1028 18:32:03.863425   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.863437   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:03.863445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:03.863516   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:03.897085   67149 cri.go:89] found id: ""
	I1028 18:32:03.897112   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.897121   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:03.897128   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:03.897189   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:03.929439   67149 cri.go:89] found id: ""
	I1028 18:32:03.929467   67149 logs.go:282] 0 containers: []
	W1028 18:32:03.929478   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:03.929487   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:03.929503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:03.982917   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:03.982943   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:03.996333   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:03.996355   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:04.062786   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:04.062813   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:04.062827   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:04.143988   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:04.144016   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:03.113499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.612544   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.620294   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:03.402246   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:05.402730   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:07.904429   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.373120   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:08.871662   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:06.683977   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:06.696605   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:06.696680   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:06.733031   67149 cri.go:89] found id: ""
	I1028 18:32:06.733060   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.733070   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:06.733078   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:06.733138   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:06.769196   67149 cri.go:89] found id: ""
	I1028 18:32:06.769218   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.769225   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:06.769231   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:06.769280   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:06.806938   67149 cri.go:89] found id: ""
	I1028 18:32:06.806959   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.806966   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:06.806972   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:06.807017   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:06.839506   67149 cri.go:89] found id: ""
	I1028 18:32:06.839528   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.839537   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:06.839542   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:06.839587   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:06.878275   67149 cri.go:89] found id: ""
	I1028 18:32:06.878300   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.878309   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:06.878317   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:06.878382   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:06.916336   67149 cri.go:89] found id: ""
	I1028 18:32:06.916366   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.916374   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:06.916381   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:06.916434   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:06.971413   67149 cri.go:89] found id: ""
	I1028 18:32:06.971435   67149 logs.go:282] 0 containers: []
	W1028 18:32:06.971443   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:06.971449   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:06.971494   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:07.004432   67149 cri.go:89] found id: ""
	I1028 18:32:07.004464   67149 logs.go:282] 0 containers: []
	W1028 18:32:07.004485   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:07.004496   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:07.004509   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:07.081741   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:07.081780   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:07.122022   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:07.122053   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:07.169470   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:07.169496   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:07.183433   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:07.183459   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:07.251765   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:09.752773   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:09.766042   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:09.766119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:09.802881   67149 cri.go:89] found id: ""
	I1028 18:32:09.802911   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.802923   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:09.802930   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:09.802987   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:09.840269   67149 cri.go:89] found id: ""
	I1028 18:32:09.840292   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.840300   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:09.840305   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:09.840370   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:09.874654   67149 cri.go:89] found id: ""
	I1028 18:32:09.874679   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.874689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:09.874696   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:09.874752   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:09.910328   67149 cri.go:89] found id: ""
	I1028 18:32:09.910350   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.910358   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:09.910365   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:09.910425   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:09.942717   67149 cri.go:89] found id: ""
	I1028 18:32:09.942744   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.942752   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:09.942757   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:09.942814   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:09.975644   67149 cri.go:89] found id: ""
	I1028 18:32:09.975674   67149 logs.go:282] 0 containers: []
	W1028 18:32:09.975685   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:09.975692   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:09.975750   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:10.008257   67149 cri.go:89] found id: ""
	I1028 18:32:10.008294   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.008305   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:10.008313   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:10.008373   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:10.041678   67149 cri.go:89] found id: ""
	I1028 18:32:10.041705   67149 logs.go:282] 0 containers: []
	W1028 18:32:10.041716   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:10.041726   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:10.041739   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:10.090474   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:10.090503   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:10.103846   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:10.103874   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:10.172819   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:10.172847   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:10.172862   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:10.251927   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:10.251955   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:10.112553   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.113090   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:10.401890   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.902888   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:11.371860   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:13.373112   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:12.795985   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:12.810859   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:12.810921   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:12.849897   67149 cri.go:89] found id: ""
	I1028 18:32:12.849925   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.849934   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:12.849940   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:12.850003   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:12.883007   67149 cri.go:89] found id: ""
	I1028 18:32:12.883034   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.883045   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:12.883052   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:12.883111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:12.917458   67149 cri.go:89] found id: ""
	I1028 18:32:12.917485   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.917496   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:12.917503   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:12.917561   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:12.950531   67149 cri.go:89] found id: ""
	I1028 18:32:12.950558   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.950568   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:12.950576   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:12.950631   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:12.983902   67149 cri.go:89] found id: ""
	I1028 18:32:12.983929   67149 logs.go:282] 0 containers: []
	W1028 18:32:12.983937   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:12.983943   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:12.983986   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:13.017486   67149 cri.go:89] found id: ""
	I1028 18:32:13.017513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.017521   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:13.017526   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:13.017582   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:13.050553   67149 cri.go:89] found id: ""
	I1028 18:32:13.050582   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.050594   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:13.050601   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:13.050658   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:13.083489   67149 cri.go:89] found id: ""
	I1028 18:32:13.083513   67149 logs.go:282] 0 containers: []
	W1028 18:32:13.083520   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:13.083528   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:13.083537   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:13.137451   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:13.137482   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:13.153154   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:13.153179   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:13.221043   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:13.221066   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:13.221080   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:13.299930   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:13.299960   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:15.850484   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:15.862930   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:15.862982   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:15.895625   67149 cri.go:89] found id: ""
	I1028 18:32:15.895643   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.895651   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:15.895657   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:15.895701   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:15.928073   67149 cri.go:89] found id: ""
	I1028 18:32:15.928103   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.928113   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:15.928120   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:15.928180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:15.962261   67149 cri.go:89] found id: ""
	I1028 18:32:15.962282   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.962290   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:15.962295   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:15.962342   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:15.999177   67149 cri.go:89] found id: ""
	I1028 18:32:15.999206   67149 logs.go:282] 0 containers: []
	W1028 18:32:15.999216   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:15.999224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:15.999282   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:16.033098   67149 cri.go:89] found id: ""
	I1028 18:32:16.033126   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.033138   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:16.033145   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:16.033208   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:14.612739   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.112266   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.401576   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:17.401773   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:15.872114   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:18.372059   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:16.067049   67149 cri.go:89] found id: ""
	I1028 18:32:16.067071   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.067083   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:16.067089   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:16.067145   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:16.106936   67149 cri.go:89] found id: ""
	I1028 18:32:16.106970   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.106981   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:16.106988   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:16.107044   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:16.141702   67149 cri.go:89] found id: ""
	I1028 18:32:16.141729   67149 logs.go:282] 0 containers: []
	W1028 18:32:16.141741   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:16.141751   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:16.141762   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:16.178772   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:16.178803   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:16.230851   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:16.230878   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:16.244489   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:16.244514   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:16.319362   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:16.319389   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:16.319405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:18.899694   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:18.913287   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:18.913358   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:18.954136   67149 cri.go:89] found id: ""
	I1028 18:32:18.954158   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.954165   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:18.954170   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:18.954218   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:18.987427   67149 cri.go:89] found id: ""
	I1028 18:32:18.987449   67149 logs.go:282] 0 containers: []
	W1028 18:32:18.987457   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:18.987462   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:18.987505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:19.022067   67149 cri.go:89] found id: ""
	I1028 18:32:19.022099   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.022110   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:19.022118   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:19.022167   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:19.054533   67149 cri.go:89] found id: ""
	I1028 18:32:19.054560   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.054570   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:19.054578   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:19.054644   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:19.099324   67149 cri.go:89] found id: ""
	I1028 18:32:19.099356   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.099367   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:19.099375   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:19.099436   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:19.146437   67149 cri.go:89] found id: ""
	I1028 18:32:19.146463   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.146470   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:19.146478   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:19.146540   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:19.192027   67149 cri.go:89] found id: ""
	I1028 18:32:19.192053   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.192070   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:19.192078   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:19.192140   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:19.228411   67149 cri.go:89] found id: ""
	I1028 18:32:19.228437   67149 logs.go:282] 0 containers: []
	W1028 18:32:19.228447   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:19.228457   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:19.228480   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:19.313151   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:19.313183   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:19.352117   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:19.352142   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:19.402772   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:19.402805   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:19.416148   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:19.416167   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:19.483098   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:19.112720   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.611924   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:19.403635   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.902116   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:20.872280   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:22.872726   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:21.983420   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:21.997129   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:21.997180   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:22.035600   67149 cri.go:89] found id: ""
	I1028 18:32:22.035622   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.035631   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:22.035637   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:22.035684   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:22.073413   67149 cri.go:89] found id: ""
	I1028 18:32:22.073440   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.073450   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:22.073458   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:22.073505   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:22.108637   67149 cri.go:89] found id: ""
	I1028 18:32:22.108663   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.108673   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:22.108682   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:22.108740   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:22.145837   67149 cri.go:89] found id: ""
	I1028 18:32:22.145860   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.145867   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:22.145873   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:22.145928   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:22.183830   67149 cri.go:89] found id: ""
	I1028 18:32:22.183855   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.183864   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:22.183869   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:22.183917   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:22.221402   67149 cri.go:89] found id: ""
	I1028 18:32:22.221423   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.221430   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:22.221436   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:22.221484   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:22.262193   67149 cri.go:89] found id: ""
	I1028 18:32:22.262220   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.262229   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:22.262234   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:22.262297   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:22.298774   67149 cri.go:89] found id: ""
	I1028 18:32:22.298797   67149 logs.go:282] 0 containers: []
	W1028 18:32:22.298808   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:22.298819   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:22.298831   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:22.348677   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:22.348716   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:22.362199   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:22.362220   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:22.429304   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:22.429327   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:22.429345   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:22.511591   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:22.511623   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.049119   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:25.063910   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:25.063970   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:25.099795   67149 cri.go:89] found id: ""
	I1028 18:32:25.099822   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.099833   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:25.099840   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:25.099898   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:25.137957   67149 cri.go:89] found id: ""
	I1028 18:32:25.137985   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.137995   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:25.138002   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:25.138063   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:25.174687   67149 cri.go:89] found id: ""
	I1028 18:32:25.174715   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.174726   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:25.174733   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:25.174795   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:25.207039   67149 cri.go:89] found id: ""
	I1028 18:32:25.207067   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.207077   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:25.207084   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:25.207130   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:25.239961   67149 cri.go:89] found id: ""
	I1028 18:32:25.239990   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.239998   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:25.240004   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:25.240055   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:25.273823   67149 cri.go:89] found id: ""
	I1028 18:32:25.273848   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.273858   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:25.273865   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:25.273925   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:25.310725   67149 cri.go:89] found id: ""
	I1028 18:32:25.310754   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.310765   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:25.310772   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:25.310830   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:25.348724   67149 cri.go:89] found id: ""
	I1028 18:32:25.348749   67149 logs.go:282] 0 containers: []
	W1028 18:32:25.348760   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:25.348770   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:25.348784   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:25.430213   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:25.430243   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:25.472233   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:25.472263   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:25.525648   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:25.525676   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:25.538697   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:25.538721   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:25.606779   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:23.612901   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.112494   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:23.902733   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:26.402271   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:25.372428   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:27.870461   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:29.871824   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.107877   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:28.122241   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:28.122296   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:28.157042   67149 cri.go:89] found id: ""
	I1028 18:32:28.157070   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.157082   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:28.157089   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:28.157142   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:28.190625   67149 cri.go:89] found id: ""
	I1028 18:32:28.190648   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.190658   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:28.190666   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:28.190724   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:28.224528   67149 cri.go:89] found id: ""
	I1028 18:32:28.224551   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.224559   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:28.224565   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:28.224609   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:28.265073   67149 cri.go:89] found id: ""
	I1028 18:32:28.265100   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.265110   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:28.265116   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:28.265174   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:28.302598   67149 cri.go:89] found id: ""
	I1028 18:32:28.302623   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.302633   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:28.302640   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:28.302697   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:28.339757   67149 cri.go:89] found id: ""
	I1028 18:32:28.339781   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.339789   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:28.339794   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:28.339846   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:28.375185   67149 cri.go:89] found id: ""
	I1028 18:32:28.375213   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.375224   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:28.375231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:28.375294   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:28.413292   67149 cri.go:89] found id: ""
	I1028 18:32:28.413316   67149 logs.go:282] 0 containers: []
	W1028 18:32:28.413334   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:28.413344   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:28.413376   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:28.464069   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:28.464098   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:28.478275   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:28.478299   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:28.546483   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:28.546504   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:28.546515   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:28.623015   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:28.623041   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:28.613303   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.111518   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:28.403789   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:30.903113   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:32.371951   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:34.372820   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:31.161570   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:31.175056   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:31.175119   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:31.210163   67149 cri.go:89] found id: ""
	I1028 18:32:31.210187   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.210199   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:31.210207   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:31.210264   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:31.244605   67149 cri.go:89] found id: ""
	I1028 18:32:31.244630   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.244637   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:31.244643   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:31.244688   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:31.280793   67149 cri.go:89] found id: ""
	I1028 18:32:31.280818   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.280827   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:31.280833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:31.280890   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:31.314616   67149 cri.go:89] found id: ""
	I1028 18:32:31.314641   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.314649   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:31.314654   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:31.314709   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:31.349386   67149 cri.go:89] found id: ""
	I1028 18:32:31.349410   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.349417   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:31.349423   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:31.349469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:31.382831   67149 cri.go:89] found id: ""
	I1028 18:32:31.382861   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.382871   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:31.382879   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:31.382924   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:31.417365   67149 cri.go:89] found id: ""
	I1028 18:32:31.417391   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.417400   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:31.417410   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:31.417469   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:31.450631   67149 cri.go:89] found id: ""
	I1028 18:32:31.450660   67149 logs.go:282] 0 containers: []
	W1028 18:32:31.450672   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:31.450683   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:31.450697   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:31.488932   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:31.488959   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:31.539335   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:31.539361   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:31.552304   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:31.552328   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:31.629291   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:31.629308   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:31.629323   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.207517   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:34.221231   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:34.221310   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:34.255342   67149 cri.go:89] found id: ""
	I1028 18:32:34.255365   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.255373   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:34.255379   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:34.255438   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:34.303802   67149 cri.go:89] found id: ""
	I1028 18:32:34.303827   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.303836   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:34.303843   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:34.303896   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:34.339531   67149 cri.go:89] found id: ""
	I1028 18:32:34.339568   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.339579   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:34.339589   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:34.339653   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:34.374063   67149 cri.go:89] found id: ""
	I1028 18:32:34.374084   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.374094   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:34.374102   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:34.374155   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:34.410880   67149 cri.go:89] found id: ""
	I1028 18:32:34.410909   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.410918   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:34.410924   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:34.410971   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:34.445372   67149 cri.go:89] found id: ""
	I1028 18:32:34.445397   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.445408   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:34.445416   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:34.445474   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:34.477820   67149 cri.go:89] found id: ""
	I1028 18:32:34.477844   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.477851   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:34.477857   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:34.477909   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:34.517581   67149 cri.go:89] found id: ""
	I1028 18:32:34.517602   67149 logs.go:282] 0 containers: []
	W1028 18:32:34.517609   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:34.517618   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:34.517632   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:34.530407   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:34.530430   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:34.599055   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:34.599083   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:34.599096   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:34.681579   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:34.681612   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:34.720523   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:34.720550   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:33.111858   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.112216   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.613521   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:33.401782   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:35.402544   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.901848   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:36.871451   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.372642   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:37.272697   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:37.289091   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:37.289159   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:37.321600   67149 cri.go:89] found id: ""
	I1028 18:32:37.321628   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.321639   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:37.321647   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:37.321704   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:37.353296   67149 cri.go:89] found id: ""
	I1028 18:32:37.353324   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.353337   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:37.353343   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:37.353400   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:37.386299   67149 cri.go:89] found id: ""
	I1028 18:32:37.386321   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.386328   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:37.386333   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:37.386401   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:37.420992   67149 cri.go:89] found id: ""
	I1028 18:32:37.421026   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.421039   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:37.421047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:37.421117   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:37.456174   67149 cri.go:89] found id: ""
	I1028 18:32:37.456206   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.456217   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:37.456224   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:37.456284   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:37.491796   67149 cri.go:89] found id: ""
	I1028 18:32:37.491819   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.491827   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:37.491833   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:37.491878   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:37.529002   67149 cri.go:89] found id: ""
	I1028 18:32:37.529028   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.529039   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:37.529047   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:37.529111   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:37.568967   67149 cri.go:89] found id: ""
	I1028 18:32:37.568993   67149 logs.go:282] 0 containers: []
	W1028 18:32:37.569001   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:37.569010   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:37.569022   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:37.640041   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:37.640065   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:37.640076   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:37.725490   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:37.725524   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:37.771858   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:37.771879   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:37.821240   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:37.821271   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.334946   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:40.349147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:40.349216   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:40.383931   67149 cri.go:89] found id: ""
	I1028 18:32:40.383956   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.383966   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:40.383973   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:40.384028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:40.419877   67149 cri.go:89] found id: ""
	I1028 18:32:40.419905   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.419915   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:40.419922   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:40.419978   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:40.453659   67149 cri.go:89] found id: ""
	I1028 18:32:40.453681   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.453689   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:40.453695   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:40.453744   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:40.486299   67149 cri.go:89] found id: ""
	I1028 18:32:40.486326   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.486343   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:40.486350   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:40.486407   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:40.518309   67149 cri.go:89] found id: ""
	I1028 18:32:40.518334   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.518344   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:40.518351   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:40.518402   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:40.549008   67149 cri.go:89] found id: ""
	I1028 18:32:40.549040   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.549049   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:40.549055   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:40.549108   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:40.586157   67149 cri.go:89] found id: ""
	I1028 18:32:40.586177   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.586184   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:40.586189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:40.586232   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:40.621107   67149 cri.go:89] found id: ""
	I1028 18:32:40.621133   67149 logs.go:282] 0 containers: []
	W1028 18:32:40.621144   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:40.621153   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:40.621164   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:40.633793   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:40.633816   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:40.700370   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:40.700393   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:40.700405   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:40.780964   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:40.780993   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:40.819904   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:40.819928   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:40.112755   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:42.113116   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:39.903476   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.904639   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:41.872360   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.371399   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:43.371487   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:43.384387   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:43.384445   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:43.419889   67149 cri.go:89] found id: ""
	I1028 18:32:43.419922   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.419931   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:43.419937   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:43.419997   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:43.455177   67149 cri.go:89] found id: ""
	I1028 18:32:43.455209   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.455219   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:43.455227   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:43.455295   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:43.493070   67149 cri.go:89] found id: ""
	I1028 18:32:43.493094   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.493104   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:43.493111   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:43.493170   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:43.526164   67149 cri.go:89] found id: ""
	I1028 18:32:43.526191   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.526199   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:43.526205   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:43.526254   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:43.559225   67149 cri.go:89] found id: ""
	I1028 18:32:43.559252   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.559263   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:43.559270   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:43.559323   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:43.597178   67149 cri.go:89] found id: ""
	I1028 18:32:43.597198   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.597206   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:43.597212   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:43.597276   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:43.633179   67149 cri.go:89] found id: ""
	I1028 18:32:43.633200   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.633209   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:43.633214   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:43.633290   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:43.669567   67149 cri.go:89] found id: ""
	I1028 18:32:43.669596   67149 logs.go:282] 0 containers: []
	W1028 18:32:43.669605   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:43.669615   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:43.669631   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:43.737618   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:43.737638   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:43.737650   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:43.821394   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:43.821425   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:43.859924   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:43.859950   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:43.913539   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:43.913566   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:44.611539   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.613781   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:44.401399   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.401930   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.371445   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.372075   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:46.429021   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:46.443137   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:46.443197   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:46.480363   67149 cri.go:89] found id: ""
	I1028 18:32:46.480385   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.480394   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:46.480400   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:46.480452   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:46.514702   67149 cri.go:89] found id: ""
	I1028 18:32:46.514731   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.514738   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:46.514744   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:46.514796   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:46.546829   67149 cri.go:89] found id: ""
	I1028 18:32:46.546857   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.546868   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:46.546874   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:46.546920   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:46.580372   67149 cri.go:89] found id: ""
	I1028 18:32:46.580398   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.580407   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:46.580415   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:46.580491   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:46.615455   67149 cri.go:89] found id: ""
	I1028 18:32:46.615479   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.615489   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:46.615497   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:46.615556   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:46.649547   67149 cri.go:89] found id: ""
	I1028 18:32:46.649570   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.649577   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:46.649583   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:46.649641   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:46.684744   67149 cri.go:89] found id: ""
	I1028 18:32:46.684768   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.684779   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:46.684787   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:46.684852   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:46.725530   67149 cri.go:89] found id: ""
	I1028 18:32:46.725558   67149 logs.go:282] 0 containers: []
	W1028 18:32:46.725569   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:46.725578   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:46.725592   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:46.794487   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:46.794506   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:46.794517   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:46.881407   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:46.881438   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:46.921649   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:46.921671   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:46.972915   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:46.972947   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.486835   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:49.501445   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:32:49.501509   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:32:49.537356   67149 cri.go:89] found id: ""
	I1028 18:32:49.537377   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.537384   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:32:49.537389   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:32:49.537443   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:32:49.568514   67149 cri.go:89] found id: ""
	I1028 18:32:49.568541   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.568549   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:32:49.568555   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:32:49.568610   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:32:49.602300   67149 cri.go:89] found id: ""
	I1028 18:32:49.602324   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.602333   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:32:49.602342   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:32:49.602390   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:32:49.640326   67149 cri.go:89] found id: ""
	I1028 18:32:49.640356   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.640366   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:32:49.640376   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:32:49.640437   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:32:49.675145   67149 cri.go:89] found id: ""
	I1028 18:32:49.675175   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.675183   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:32:49.675189   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:32:49.675235   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:32:49.711104   67149 cri.go:89] found id: ""
	I1028 18:32:49.711129   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.711139   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:32:49.711147   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:32:49.711206   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:32:49.748316   67149 cri.go:89] found id: ""
	I1028 18:32:49.748366   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.748378   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:32:49.748385   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:32:49.748441   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:32:49.781620   67149 cri.go:89] found id: ""
	I1028 18:32:49.781646   67149 logs.go:282] 0 containers: []
	W1028 18:32:49.781656   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 18:32:49.781665   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:32:49.781679   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:32:49.795119   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:32:49.795143   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:32:49.870438   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:32:49.870519   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:32:49.870539   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:32:49.956845   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:32:49.956875   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 18:32:49.993067   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:32:49.993097   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:32:49.112102   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:51.612691   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:48.901950   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.902354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.903627   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:50.871412   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.871499   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:54.874588   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:52.543260   67149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:32:52.556524   67149 kubeadm.go:597] duration metric: took 4m2.404527005s to restartPrimaryControlPlane
	W1028 18:32:52.556602   67149 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:52.556639   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:32:53.011065   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:32:53.026226   67149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:32:53.035868   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:32:53.045257   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:32:53.045271   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:32:53.045302   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:32:53.054383   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:32:53.054430   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:32:53.063665   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:32:53.073006   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:32:53.073054   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:32:53.083156   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.092700   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:32:53.092742   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:32:53.102374   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:32:53.112072   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:32:53.112121   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:32:53.122102   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:32:53.347625   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:32:53.613118   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:56.111841   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:55.402354   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.902406   66801 pod_ready.go:103] pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:57.371909   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:59.872630   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.112962   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:00.613499   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:32:58.896006   66801 pod_ready.go:82] duration metric: took 4m0.00005957s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" ...
	E1028 18:32:58.896033   66801 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vgd8k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:32:58.896052   66801 pod_ready.go:39] duration metric: took 4m13.055181811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:32:58.896092   66801 kubeadm.go:597] duration metric: took 4m21.540757653s to restartPrimaryControlPlane
	W1028 18:32:58.896147   66801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:32:58.896173   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:02.372443   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:04.871981   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:03.113038   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:05.114488   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:07.612365   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:06.872705   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.371018   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:09.612856   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:12.114228   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:11.371831   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:13.372636   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:14.613213   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.113328   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:15.871907   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:17.872203   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:19.612892   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:21.613052   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:20.370964   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:22.371880   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:24.372718   67489 pod_ready.go:103] pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:25.039296   66801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.14309835s)
	I1028 18:33:25.039378   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:25.056172   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:25.066775   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:25.077717   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:25.077734   66801 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:25.077770   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:33:25.086924   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:25.086968   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:25.096867   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:33:25.106162   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:25.106205   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:25.117015   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.126191   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:25.126245   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:25.135691   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:33:25.144827   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:25.144867   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:25.153834   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:25.201789   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:33:25.201866   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:33:25.306568   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:33:25.306717   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:33:25.306845   66801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:33:25.314339   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:33:25.316173   66801 out.go:235]   - Generating certificates and keys ...
	I1028 18:33:25.316271   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:33:25.316345   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:33:25.316463   66801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:33:25.316571   66801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:33:25.316688   66801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:33:25.316768   66801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:33:25.316857   66801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:33:25.316943   66801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:33:25.317047   66801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:33:25.317149   66801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:33:25.317209   66801 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:33:25.317299   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:33:25.643056   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:33:25.723345   66801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:33:25.831628   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:33:25.908255   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:33:26.215149   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:33:26.215654   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:33:26.218291   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:33:24.111834   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.113295   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:26.220065   66801 out.go:235]   - Booting up control plane ...
	I1028 18:33:26.220170   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:33:26.220251   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:33:26.220336   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:33:26.239633   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:33:26.245543   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:33:26.245612   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:33:26.378154   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:33:26.378332   66801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:33:26.879957   66801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.937575ms
	I1028 18:33:26.880090   66801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:33:26.365771   67489 pod_ready.go:82] duration metric: took 4m0.000286415s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:26.365796   67489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dz4nl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:26.365812   67489 pod_ready.go:39] duration metric: took 4m12.539631154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:26.365837   67489 kubeadm.go:597] duration metric: took 4m19.835720994s to restartPrimaryControlPlane
	W1028 18:33:26.365884   67489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:26.365910   67489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:31.882091   66801 kubeadm.go:310] [api-check] The API server is healthy after 5.002114527s
	I1028 18:33:31.897915   66801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:33:31.914311   66801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:33:31.943604   66801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:33:31.943859   66801 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-051152 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:33:31.954350   66801 kubeadm.go:310] [bootstrap-token] Using token: h7eyzq.87sgylc03ke6zhfy
	I1028 18:33:28.613480   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.113034   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:31.955444   66801 out.go:235]   - Configuring RBAC rules ...
	I1028 18:33:31.955591   66801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:33:31.960749   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:33:31.967695   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:33:31.970863   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:33:31.973924   66801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:33:31.979191   66801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:33:32.291512   66801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:33:32.714999   66801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:33:33.291889   66801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:33:33.293069   66801 kubeadm.go:310] 
	I1028 18:33:33.293167   66801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:33:33.293182   66801 kubeadm.go:310] 
	I1028 18:33:33.293255   66801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:33:33.293268   66801 kubeadm.go:310] 
	I1028 18:33:33.293307   66801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:33:33.293372   66801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:33:33.293435   66801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:33:33.293447   66801 kubeadm.go:310] 
	I1028 18:33:33.293518   66801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:33:33.293526   66801 kubeadm.go:310] 
	I1028 18:33:33.293595   66801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:33:33.293624   66801 kubeadm.go:310] 
	I1028 18:33:33.293712   66801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:33:33.293842   66801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:33:33.293946   66801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:33:33.293960   66801 kubeadm.go:310] 
	I1028 18:33:33.294117   66801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:33:33.294196   66801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:33:33.294203   66801 kubeadm.go:310] 
	I1028 18:33:33.294276   66801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294385   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:33:33.294414   66801 kubeadm.go:310] 	--control-plane 
	I1028 18:33:33.294427   66801 kubeadm.go:310] 
	I1028 18:33:33.294515   66801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:33:33.294525   66801 kubeadm.go:310] 
	I1028 18:33:33.294629   66801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h7eyzq.87sgylc03ke6zhfy \
	I1028 18:33:33.294774   66801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:33:33.295715   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:33:33.295839   66801 cni.go:84] Creating CNI manager for ""
	I1028 18:33:33.295852   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:33:33.297447   66801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:33:33.298607   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:33:33.311113   66801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:33:33.329576   66801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:33:33.329634   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:33.329680   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-051152 minikube.k8s.io/updated_at=2024_10_28T18_33_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=no-preload-051152 minikube.k8s.io/primary=true
	I1028 18:33:33.355186   66801 ops.go:34] apiserver oom_adj: -16
	I1028 18:33:33.509281   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.009672   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:34.509515   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.010084   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:35.509359   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.009689   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:36.509671   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.009884   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.510004   66801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:33:37.615853   66801 kubeadm.go:1113] duration metric: took 4.286272328s to wait for elevateKubeSystemPrivileges
	I1028 18:33:37.615890   66801 kubeadm.go:394] duration metric: took 5m0.313982235s to StartCluster
	I1028 18:33:37.615913   66801 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.616000   66801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:33:37.618418   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:33:37.618741   66801 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.78 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:33:37.618857   66801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:33:37.618951   66801 addons.go:69] Setting storage-provisioner=true in profile "no-preload-051152"
	I1028 18:33:37.618963   66801 config.go:182] Loaded profile config "no-preload-051152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:33:37.618975   66801 addons.go:69] Setting default-storageclass=true in profile "no-preload-051152"
	I1028 18:33:37.619001   66801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-051152"
	I1028 18:33:37.618973   66801 addons.go:234] Setting addon storage-provisioner=true in "no-preload-051152"
	W1028 18:33:37.619019   66801 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:33:37.619012   66801 addons.go:69] Setting metrics-server=true in profile "no-preload-051152"
	I1028 18:33:37.619043   66801 addons.go:234] Setting addon metrics-server=true in "no-preload-051152"
	I1028 18:33:37.619047   66801 host.go:66] Checking if "no-preload-051152" exists ...
	W1028 18:33:37.619056   66801 addons.go:243] addon metrics-server should already be in state true
	I1028 18:33:37.619097   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.619417   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619446   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619472   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619488   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.619487   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.619521   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.620738   66801 out.go:177] * Verifying Kubernetes components...
	I1028 18:33:37.622165   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:33:37.636006   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1028 18:33:37.636285   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I1028 18:33:37.636536   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.636621   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.637055   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637082   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637344   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.637368   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.637419   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637634   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.637811   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.638112   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.638157   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.638738   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1028 18:33:37.639176   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.639609   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.639632   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.639918   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.640333   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.640375   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.641571   66801 addons.go:234] Setting addon default-storageclass=true in "no-preload-051152"
	W1028 18:33:37.641592   66801 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:33:37.641620   66801 host.go:66] Checking if "no-preload-051152" exists ...
	I1028 18:33:37.641947   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.641981   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.657758   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I1028 18:33:37.657834   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I1028 18:33:37.657942   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I1028 18:33:37.658187   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658335   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.658739   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658752   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658877   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.658896   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.658931   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.659309   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659358   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.659409   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.659428   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.659552   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.659934   66801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:33:37.659964   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:33:37.660163   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.660406   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.661568   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.662429   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.663435   66801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:33:37.664414   66801 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:33:33.613699   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:36.111831   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:37.665306   66801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.665324   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:33:37.665343   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.666055   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:33:37.666073   66801 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:33:37.666092   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.668918   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669385   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669519   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.669543   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.669754   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.669942   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.670093   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.670266   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.670513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.670556   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.670719   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.670851   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.671014   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.671115   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.677419   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I1028 18:33:37.677828   66801 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:33:37.678184   66801 main.go:141] libmachine: Using API Version  1
	I1028 18:33:37.678201   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:33:37.678476   66801 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:33:37.678686   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetState
	I1028 18:33:37.680177   66801 main.go:141] libmachine: (no-preload-051152) Calling .DriverName
	I1028 18:33:37.680403   66801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.680420   66801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:33:37.680437   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHHostname
	I1028 18:33:37.683981   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684513   66801 main.go:141] libmachine: (no-preload-051152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:67:79", ip: ""} in network mk-no-preload-051152: {Iface:virbr3 ExpiryTime:2024-10-28 19:28:11 +0000 UTC Type:0 Mac:52:54:00:00:67:79 Iaid: IPaddr:192.168.61.78 Prefix:24 Hostname:no-preload-051152 Clientid:01:52:54:00:00:67:79}
	I1028 18:33:37.684534   66801 main.go:141] libmachine: (no-preload-051152) DBG | domain no-preload-051152 has defined IP address 192.168.61.78 and MAC address 52:54:00:00:67:79 in network mk-no-preload-051152
	I1028 18:33:37.684798   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHPort
	I1028 18:33:37.685007   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHKeyPath
	I1028 18:33:37.685153   66801 main.go:141] libmachine: (no-preload-051152) Calling .GetSSHUsername
	I1028 18:33:37.685307   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/no-preload-051152/id_rsa Username:docker}
	I1028 18:33:37.832104   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:33:37.859406   66801 node_ready.go:35] waiting up to 6m0s for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873437   66801 node_ready.go:49] node "no-preload-051152" has status "Ready":"True"
	I1028 18:33:37.873460   66801 node_ready.go:38] duration metric: took 14.023686ms for node "no-preload-051152" to be "Ready" ...
	I1028 18:33:37.873470   66801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:37.888286   66801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:37.917341   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:33:37.917363   66801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:33:37.948690   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:33:37.948716   66801 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:33:37.967948   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:33:37.971737   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:33:37.998758   66801 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:37.998782   66801 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:33:38.034907   66801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:33:38.924695   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924720   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.924762   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.924828   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925048   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925079   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925093   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925105   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925128   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.925131   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925142   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925153   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925154   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.925164   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.925372   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.925397   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.925382   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926852   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.926857   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.926872   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:38.955462   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:38.955492   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:38.955858   66801 main.go:141] libmachine: (no-preload-051152) DBG | Closing plugin on server side
	I1028 18:33:38.955938   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:38.955953   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373144   66801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.338192413s)
	I1028 18:33:39.373209   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373224   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373512   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373529   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373537   66801 main.go:141] libmachine: Making call to close driver server
	I1028 18:33:39.373544   66801 main.go:141] libmachine: (no-preload-051152) Calling .Close
	I1028 18:33:39.373761   66801 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:33:39.373775   66801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:33:39.373785   66801 addons.go:475] Verifying addon metrics-server=true in "no-preload-051152"
	I1028 18:33:39.375584   66801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:33:38.113078   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:40.612141   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.612763   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:39.377031   66801 addons.go:510] duration metric: took 1.758176418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:33:39.906691   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:42.396083   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:44.894264   66801 pod_ready.go:103] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:46.396937   66801 pod_ready.go:93] pod "etcd-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.397023   66801 pod_ready.go:82] duration metric: took 8.508709164s for pod "etcd-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.397048   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402560   66801 pod_ready.go:93] pod "kube-apiserver-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.402579   66801 pod_ready.go:82] duration metric: took 5.5155ms for pod "kube-apiserver-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.402588   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406630   66801 pod_ready.go:93] pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.406646   66801 pod_ready.go:82] duration metric: took 4.052513ms for pod "kube-controller-manager-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.406654   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411238   66801 pod_ready.go:93] pod "kube-proxy-28qht" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.411253   66801 pod_ready.go:82] duration metric: took 4.592983ms for pod "kube-proxy-28qht" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.411260   66801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414867   66801 pod_ready.go:93] pod "kube-scheduler-no-preload-051152" in "kube-system" namespace has status "Ready":"True"
	I1028 18:33:46.414880   66801 pod_ready.go:82] duration metric: took 3.615132ms for pod "kube-scheduler-no-preload-051152" in "kube-system" namespace to be "Ready" ...
	I1028 18:33:46.414886   66801 pod_ready.go:39] duration metric: took 8.541406133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:46.414900   66801 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:33:46.414943   66801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:33:46.430889   66801 api_server.go:72] duration metric: took 8.81211088s to wait for apiserver process to appear ...
	I1028 18:33:46.430907   66801 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:33:46.430925   66801 api_server.go:253] Checking apiserver healthz at https://192.168.61.78:8443/healthz ...
	I1028 18:33:46.435248   66801 api_server.go:279] https://192.168.61.78:8443/healthz returned 200:
	ok
	I1028 18:33:46.435963   66801 api_server.go:141] control plane version: v1.31.2
	I1028 18:33:46.435978   66801 api_server.go:131] duration metric: took 5.065719ms to wait for apiserver health ...
	I1028 18:33:46.435984   66801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:33:46.596186   66801 system_pods.go:59] 9 kube-system pods found
	I1028 18:33:46.596222   66801 system_pods.go:61] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.596230   66801 system_pods.go:61] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.596234   66801 system_pods.go:61] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.596238   66801 system_pods.go:61] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.596242   66801 system_pods.go:61] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.596246   66801 system_pods.go:61] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.596252   66801 system_pods.go:61] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.596301   66801 system_pods.go:61] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.596317   66801 system_pods.go:61] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.596324   66801 system_pods.go:74] duration metric: took 160.335823ms to wait for pod list to return data ...
	I1028 18:33:46.596341   66801 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:33:46.793115   66801 default_sa.go:45] found service account: "default"
	I1028 18:33:46.793147   66801 default_sa.go:55] duration metric: took 196.795286ms for default service account to be created ...
	I1028 18:33:46.793157   66801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:33:46.995868   66801 system_pods.go:86] 9 kube-system pods found
	I1028 18:33:46.995899   66801 system_pods.go:89] "coredns-7c65d6cfc9-mxhp2" [4aec7fb0-910f-48c1-ad4b-8bb21fd7e24d] Running
	I1028 18:33:46.995905   66801 system_pods.go:89] "coredns-7c65d6cfc9-sx5qg" [e687b4d1-ab2e-4084-b1b0-f15b5e7817af] Running
	I1028 18:33:46.995909   66801 system_pods.go:89] "etcd-no-preload-051152" [9a5f8fcb-6ced-4b05-945b-8b1097cb5c78] Running
	I1028 18:33:46.995912   66801 system_pods.go:89] "kube-apiserver-no-preload-051152" [c1a672ae-611a-42f3-91b7-fdcb5826ca93] Running
	I1028 18:33:46.995917   66801 system_pods.go:89] "kube-controller-manager-no-preload-051152" [45484833-6ba8-48bc-8902-c1e18a2b623b] Running
	I1028 18:33:46.995920   66801 system_pods.go:89] "kube-proxy-28qht" [710be347-bd18-4873-be61-1ccfd2088686] Running
	I1028 18:33:46.995924   66801 system_pods.go:89] "kube-scheduler-no-preload-051152" [e2d55fc7-2b69-44ed-993d-2b9003351776] Running
	I1028 18:33:46.995929   66801 system_pods.go:89] "metrics-server-6867b74b74-9rh4q" [24f7156f-c19f-4d0b-8d23-c88e0fe571de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:33:46.995934   66801 system_pods.go:89] "storage-provisioner" [3fb18822-fcad-4041-9ac9-644b101d8ca4] Running
	I1028 18:33:46.995941   66801 system_pods.go:126] duration metric: took 202.778451ms to wait for k8s-apps to be running ...
	I1028 18:33:46.995946   66801 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:33:46.995990   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:47.011260   66801 system_svc.go:56] duration metric: took 15.302599ms WaitForService to wait for kubelet
	I1028 18:33:47.011285   66801 kubeadm.go:582] duration metric: took 9.392510785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:33:47.011303   66801 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:33:47.193217   66801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:33:47.193239   66801 node_conditions.go:123] node cpu capacity is 2
	I1028 18:33:47.193250   66801 node_conditions.go:105] duration metric: took 181.942948ms to run NodePressure ...
	I1028 18:33:47.193261   66801 start.go:241] waiting for startup goroutines ...
	I1028 18:33:47.193267   66801 start.go:246] waiting for cluster config update ...
	I1028 18:33:47.193278   66801 start.go:255] writing updated cluster config ...
	I1028 18:33:47.193529   66801 ssh_runner.go:195] Run: rm -f paused
	I1028 18:33:47.240247   66801 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:33:47.242139   66801 out.go:177] * Done! kubectl is now configured to use "no-preload-051152" cluster and "default" namespace by default
	I1028 18:33:45.112037   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:47.112764   66600 pod_ready.go:103] pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace has status "Ready":"False"
	I1028 18:33:48.107354   66600 pod_ready.go:82] duration metric: took 4m0.001062902s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" ...
	E1028 18:33:48.107377   66600 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gg8bl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 18:33:48.107395   66600 pod_ready.go:39] duration metric: took 4m13.535788316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:33:48.107420   66600 kubeadm.go:597] duration metric: took 4m22.316644235s to restartPrimaryControlPlane
	W1028 18:33:48.107467   66600 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 18:33:48.107490   66600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:33:52.667497   67489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.301566887s)
	I1028 18:33:52.667559   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:33:52.683580   67489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:33:52.695334   67489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:33:52.705505   67489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:33:52.705524   67489 kubeadm.go:157] found existing configuration files:
	
	I1028 18:33:52.705569   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 18:33:52.714922   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:33:52.714969   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:33:52.724156   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 18:33:52.733125   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:33:52.733161   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:33:52.742369   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.751021   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:33:52.751065   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:33:52.760543   67489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 18:33:52.770939   67489 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:33:52.770985   67489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:33:52.781890   67489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:33:52.961562   67489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:01.798408   67489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:01.798470   67489 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:01.798580   67489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:01.798724   67489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:01.798811   67489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:01.798882   67489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:01.800228   67489 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:01.800320   67489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:01.800392   67489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:01.800486   67489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:01.800580   67489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:01.800641   67489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:01.800694   67489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:01.800764   67489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:01.800842   67489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:01.800955   67489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:01.801019   67489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:01.801053   67489 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:01.801102   67489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:01.801145   67489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:01.801196   67489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:01.801252   67489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:01.801316   67489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:01.801409   67489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:01.801513   67489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:01.801605   67489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:01.802967   67489 out.go:235]   - Booting up control plane ...
	I1028 18:34:01.803061   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:01.803169   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:01.803254   67489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:01.803376   67489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:01.803488   67489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:01.803558   67489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:01.803685   67489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:01.803800   67489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:01.803869   67489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.148945ms
	I1028 18:34:01.803933   67489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:01.803986   67489 kubeadm.go:310] [api-check] The API server is healthy after 5.003798359s
	I1028 18:34:01.804081   67489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:01.804187   67489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:01.804240   67489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:01.804438   67489 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-692033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:01.804533   67489 kubeadm.go:310] [bootstrap-token] Using token: wy8zqj.38m6tcr6hp7sgzod
	I1028 18:34:01.805760   67489 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:01.805856   67489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:01.805949   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:01.806108   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:01.806233   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:01.806378   67489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:01.806464   67489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:01.806579   67489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:01.806633   67489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:01.806673   67489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:01.806679   67489 kubeadm.go:310] 
	I1028 18:34:01.806735   67489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:01.806746   67489 kubeadm.go:310] 
	I1028 18:34:01.806836   67489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:01.806844   67489 kubeadm.go:310] 
	I1028 18:34:01.806880   67489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:01.806957   67489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:01.807001   67489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:01.807007   67489 kubeadm.go:310] 
	I1028 18:34:01.807060   67489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:01.807071   67489 kubeadm.go:310] 
	I1028 18:34:01.807112   67489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:01.807118   67489 kubeadm.go:310] 
	I1028 18:34:01.807171   67489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:01.807246   67489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:01.807307   67489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:01.807313   67489 kubeadm.go:310] 
	I1028 18:34:01.807387   67489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:01.807454   67489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:01.807465   67489 kubeadm.go:310] 
	I1028 18:34:01.807538   67489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807634   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:01.807655   67489 kubeadm.go:310] 	--control-plane 
	I1028 18:34:01.807661   67489 kubeadm.go:310] 
	I1028 18:34:01.807730   67489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:01.807739   67489 kubeadm.go:310] 
	I1028 18:34:01.807810   67489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wy8zqj.38m6tcr6hp7sgzod \
	I1028 18:34:01.807913   67489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:01.807923   67489 cni.go:84] Creating CNI manager for ""
	I1028 18:34:01.807929   67489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:01.809168   67489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:01.810293   67489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:01.822030   67489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:34:01.842831   67489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:01.842908   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:01.842963   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-692033 minikube.k8s.io/updated_at=2024_10_28T18_34_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=default-k8s-diff-port-692033 minikube.k8s.io/primary=true
	I1028 18:34:01.875265   67489 ops.go:34] apiserver oom_adj: -16
	I1028 18:34:02.050422   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:02.550824   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.050477   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:03.551245   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.051177   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:04.550572   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.051071   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.550926   67489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:05.638447   67489 kubeadm.go:1113] duration metric: took 3.795598924s to wait for elevateKubeSystemPrivileges
	I1028 18:34:05.638483   67489 kubeadm.go:394] duration metric: took 4m59.162037455s to StartCluster
	I1028 18:34:05.638504   67489 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.638591   67489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:05.641196   67489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:05.641497   67489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:05.641626   67489 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:05.641720   67489 config.go:182] Loaded profile config "default-k8s-diff-port-692033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:05.641730   67489 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641748   67489 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641760   67489 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:05.641776   67489 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641781   67489 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-692033"
	I1028 18:34:05.641792   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.641794   67489 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.641803   67489 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:05.641804   67489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-692033"
	I1028 18:34:05.641832   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.642210   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642217   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642229   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.642245   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642255   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642314   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.642905   67489 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:05.644361   67489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:05.658478   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I1028 18:34:05.658586   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1028 18:34:05.659040   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659044   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.659524   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659546   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659701   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.659724   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.659879   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660044   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.660111   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.660610   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.660648   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.661748   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1028 18:34:05.662150   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.662607   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.662627   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.662983   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.662991   67489 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-692033"
	W1028 18:34:05.663006   67489 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:05.663029   67489 host.go:66] Checking if "default-k8s-diff-port-692033" exists ...
	I1028 18:34:05.663294   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663334   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.663531   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.663572   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.675955   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1028 18:34:05.676345   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.676784   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.676802   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.677154   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.677358   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.678723   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I1028 18:34:05.678897   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1028 18:34:05.679025   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.679243   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679337   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.679700   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679715   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.679805   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.679823   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.680500   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680506   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.680706   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.680834   67489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:05.681042   67489 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:05.681070   67489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:05.681982   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:05.682005   67489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:05.682035   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.682363   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.683806   67489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:05.684992   67489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.685011   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:05.685029   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.686903   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.686957   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.686973   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.687218   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.687429   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.687693   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.687850   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.688516   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.688908   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.688933   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.689193   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.689372   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.689513   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.689655   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.696743   67489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I1028 18:34:05.697029   67489 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:05.697432   67489 main.go:141] libmachine: Using API Version  1
	I1028 18:34:05.697458   67489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:05.697697   67489 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:05.697843   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetState
	I1028 18:34:05.699192   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .DriverName
	I1028 18:34:05.699397   67489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.699405   67489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:05.699416   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHHostname
	I1028 18:34:05.702897   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703341   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:53:89", ip: ""} in network mk-default-k8s-diff-port-692033: {Iface:virbr1 ExpiryTime:2024-10-28 19:28:52 +0000 UTC Type:0 Mac:52:54:00:89:53:89 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-692033 Clientid:01:52:54:00:89:53:89}
	I1028 18:34:05.703368   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | domain default-k8s-diff-port-692033 has defined IP address 192.168.39.215 and MAC address 52:54:00:89:53:89 in network mk-default-k8s-diff-port-692033
	I1028 18:34:05.703483   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHPort
	I1028 18:34:05.703667   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHKeyPath
	I1028 18:34:05.703841   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .GetSSHUsername
	I1028 18:34:05.703996   67489 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/default-k8s-diff-port-692033/id_rsa Username:docker}
	I1028 18:34:05.838049   67489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:05.857829   67489 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866141   67489 node_ready.go:49] node "default-k8s-diff-port-692033" has status "Ready":"True"
	I1028 18:34:05.866158   67489 node_ready.go:38] duration metric: took 8.296617ms for node "default-k8s-diff-port-692033" to be "Ready" ...
	I1028 18:34:05.866167   67489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:05.873027   67489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:05.927585   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:05.927608   67489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:05.928743   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:05.946390   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:05.961712   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:05.961734   67489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:05.993688   67489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:05.993711   67489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:06.097871   67489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:06.696189   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696226   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696195   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696300   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696696   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696713   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696697   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696721   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.696700   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:06.696735   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.696742   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696750   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696722   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.696794   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.696984   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697000   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.697027   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.697036   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:06.720324   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:06.720346   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:06.720649   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:06.720668   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262166   67489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.164245646s)
	I1028 18:34:07.262256   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262277   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262587   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262608   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262607   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262616   67489 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:07.262625   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) Calling .Close
	I1028 18:34:07.262890   67489 main.go:141] libmachine: (default-k8s-diff-port-692033) DBG | Closing plugin on server side
	I1028 18:34:07.262923   67489 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:07.262936   67489 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:07.262948   67489 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-692033"
	I1028 18:34:07.264414   67489 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:07.265449   67489 addons.go:510] duration metric: took 1.623834435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
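(Aside, not part of the captured log: the addon enable step above copies the manifests onto the node and then runs kubectl apply against the node-local kubeconfig via ssh_runner. A rough local analogue of just that apply step, as a minimal sketch — the manifest paths and KUBECONFIG location are copied from the log, while running it locally instead of over SSH is an assumption for illustration only — could look like:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Apply the metrics-server manifests the same way the log shows,
	// pointing kubectl at the node-local kubeconfig via the environment.
	cmd := exec.Command("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}

The real run then verifies the addon by waiting for the metrics-server deployment, which is why "Verifying addon metrics-server=true" appears a few lines earlier.)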
	I1028 18:34:07.882264   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.313629   66600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.206119005s)
	I1028 18:34:14.313702   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:14.329212   66600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 18:34:14.339407   66600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:14.349645   66600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:14.349669   66600 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:14.349716   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:14.359332   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:14.359384   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:14.369627   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:14.381040   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:14.381098   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:14.390359   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.399743   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:14.399783   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:14.408932   66600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:14.417840   66600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:14.417876   66600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 18:34:14.427234   66600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:14.472502   66600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 18:34:14.472593   66600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:14.578311   66600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:14.578456   66600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:14.578576   66600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 18:34:14.586748   66600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:10.380304   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:12.878632   67489 pod_ready.go:103] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:14.878951   67489 pod_ready.go:93] pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:14.878974   67489 pod_ready.go:82] duration metric: took 9.005915421s for pod "etcd-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:14.878983   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385215   67489 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.385239   67489 pod_ready.go:82] duration metric: took 506.249352ms for pod "kube-apiserver-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.385250   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390412   67489 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.390435   67489 pod_ready.go:82] duration metric: took 5.177559ms for pod "kube-controller-manager-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.390448   67489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395252   67489 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:15.395272   67489 pod_ready.go:82] duration metric: took 4.816812ms for pod "kube-scheduler-default-k8s-diff-port-692033" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:15.395281   67489 pod_ready.go:39] duration metric: took 9.52910413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:15.395298   67489 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:15.395349   67489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:15.413693   67489 api_server.go:72] duration metric: took 9.772160727s to wait for apiserver process to appear ...
	I1028 18:34:15.413715   67489 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:15.413734   67489 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I1028 18:34:15.417780   67489 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I1028 18:34:15.418688   67489 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:15.418712   67489 api_server.go:131] duration metric: took 4.989226ms to wait for apiserver health ...
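(Aside, not part of the captured log: the healthz wait above is a plain HTTPS GET against the apiserver, expecting a 200 with body "ok". A minimal stand-alone sketch of the same kind of probe — the endpoint URL is taken from the log, and skipping TLS verification is an assumption for illustration only, since the real client would trust the cluster CA — might be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over HTTPS with a cluster-internal CA,
	// so this throwaway probe skips verification (illustration only).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.215:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}

)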
	I1028 18:34:15.418720   67489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:15.424285   67489 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:15.424306   67489 system_pods.go:61] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.424310   67489 system_pods.go:61] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.424315   67489 system_pods.go:61] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.424318   67489 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.424323   67489 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.424327   67489 system_pods.go:61] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.424331   67489 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.424337   67489 system_pods.go:61] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.424344   67489 system_pods.go:61] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.424351   67489 system_pods.go:74] duration metric: took 5.625205ms to wait for pod list to return data ...
	I1028 18:34:15.424359   67489 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:15.427132   67489 default_sa.go:45] found service account: "default"
	I1028 18:34:15.427153   67489 default_sa.go:55] duration metric: took 2.788005ms for default service account to be created ...
	I1028 18:34:15.427161   67489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:15.479404   67489 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:15.479427   67489 system_pods.go:89] "coredns-7c65d6cfc9-25sf7" [c4e4eda2-a141-4111-b71b-ae8efd6e250f] Running
	I1028 18:34:15.479433   67489 system_pods.go:89] "coredns-7c65d6cfc9-rhvmm" [41008126-560b-4c8e-b110-4a180c56ab0b] Running
	I1028 18:34:15.479436   67489 system_pods.go:89] "etcd-default-k8s-diff-port-692033" [9d0e0fbc-e7e7-4e2c-a29c-2f36ef6753f2] Running
	I1028 18:34:15.479443   67489 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-692033" [030603a0-9929-4f82-9759-1f3bca356b41] Running
	I1028 18:34:15.479448   67489 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-692033" [27e35461-262f-44b6-9a3e-aa440072883e] Running
	I1028 18:34:15.479453   67489 system_pods.go:89] "kube-proxy-b56jx" [4c73611b-f055-4fa4-9665-f73469c6e236] Running
	I1028 18:34:15.479460   67489 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-692033" [e507344c-9af2-411c-9539-605038d52761] Running
	I1028 18:34:15.479472   67489 system_pods.go:89] "metrics-server-6867b74b74-8vz62" [b6498143-8e21-4f11-9d29-e20964e74203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:15.479477   67489 system_pods.go:89] "storage-provisioner" [1021f60d-1944-4f55-a4d9-1a8f8a3ae0df] Running
	I1028 18:34:15.479491   67489 system_pods.go:126] duration metric: took 52.324012ms to wait for k8s-apps to be running ...
	I1028 18:34:15.479502   67489 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:15.479548   67489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:15.493743   67489 system_svc.go:56] duration metric: took 14.233947ms WaitForService to wait for kubelet
	I1028 18:34:15.493772   67489 kubeadm.go:582] duration metric: took 9.852243286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 18:34:15.493796   67489 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:15.677127   67489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:15.677149   67489 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:15.677156   67489 node_conditions.go:105] duration metric: took 183.355591ms to run NodePressure ...
	I1028 18:34:15.677167   67489 start.go:241] waiting for startup goroutines ...
	I1028 18:34:15.677174   67489 start.go:246] waiting for cluster config update ...
	I1028 18:34:15.677183   67489 start.go:255] writing updated cluster config ...
	I1028 18:34:15.677419   67489 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:15.731157   67489 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:15.732912   67489 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-692033" cluster and "default" namespace by default
	I1028 18:34:14.588528   66600 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:14.588660   66600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:14.588749   66600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:14.588886   66600 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:14.588985   66600 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:14.589089   66600 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:14.589179   66600 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:14.589268   66600 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:14.589362   66600 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:14.589472   66600 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:14.589575   66600 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:14.589638   66600 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:14.589739   66600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:14.902456   66600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:15.107236   66600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 18:34:15.198073   66600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:15.618175   66600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:15.804761   66600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:15.805675   66600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:15.809860   66600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:15.811538   66600 out.go:235]   - Booting up control plane ...
	I1028 18:34:15.811658   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:15.811761   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:15.812969   66600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:15.838182   66600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:15.846044   66600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:15.846126   66600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:15.981748   66600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 18:34:15.981899   66600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 18:34:16.483112   66600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262752ms
	I1028 18:34:16.483242   66600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 18:34:21.484655   66600 kubeadm.go:310] [api-check] The API server is healthy after 5.001327308s
	I1028 18:34:21.498067   66600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 18:34:21.508713   66600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 18:34:21.537520   66600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 18:34:21.537724   66600 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-021370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 18:34:21.551416   66600 kubeadm.go:310] [bootstrap-token] Using token: c2otm2.eh2uwearn2r38epe
	I1028 18:34:21.552613   66600 out.go:235]   - Configuring RBAC rules ...
	I1028 18:34:21.552721   66600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 18:34:21.556871   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 18:34:21.563570   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 18:34:21.566336   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 18:34:21.569226   66600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 18:34:21.575090   66600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 18:34:21.890874   66600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 18:34:22.315363   66600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 18:34:22.892050   66600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 18:34:22.892097   66600 kubeadm.go:310] 
	I1028 18:34:22.892198   66600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 18:34:22.892214   66600 kubeadm.go:310] 
	I1028 18:34:22.892297   66600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 18:34:22.892308   66600 kubeadm.go:310] 
	I1028 18:34:22.892346   66600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 18:34:22.892457   66600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 18:34:22.892549   66600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 18:34:22.892559   66600 kubeadm.go:310] 
	I1028 18:34:22.892628   66600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 18:34:22.892643   66600 kubeadm.go:310] 
	I1028 18:34:22.892705   66600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 18:34:22.892715   66600 kubeadm.go:310] 
	I1028 18:34:22.892784   66600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 18:34:22.892851   66600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 18:34:22.892958   66600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 18:34:22.892981   66600 kubeadm.go:310] 
	I1028 18:34:22.893093   66600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 18:34:22.893197   66600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 18:34:22.893212   66600 kubeadm.go:310] 
	I1028 18:34:22.893320   66600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893460   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 \
	I1028 18:34:22.893506   66600 kubeadm.go:310] 	--control-plane 
	I1028 18:34:22.893515   66600 kubeadm.go:310] 
	I1028 18:34:22.893622   66600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 18:34:22.893631   66600 kubeadm.go:310] 
	I1028 18:34:22.893728   66600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2otm2.eh2uwearn2r38epe \
	I1028 18:34:22.893886   66600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c3dc27df991a8b98e77979955a809cc57f592e084452492b4d92daefc54c9a3 
	I1028 18:34:22.894813   66600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:34:22.895022   66600 cni.go:84] Creating CNI manager for ""
	I1028 18:34:22.895037   66600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 18:34:22.897376   66600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 18:34:22.898532   66600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 18:34:22.909363   66600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 18:34:22.930151   66600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 18:34:22.930190   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:22.930280   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-021370 minikube.k8s.io/updated_at=2024_10_28T18_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8 minikube.k8s.io/name=embed-certs-021370 minikube.k8s.io/primary=true
	I1028 18:34:22.963249   66600 ops.go:34] apiserver oom_adj: -16
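(Aside, not part of the captured log: the "apiserver oom_adj: -16" line comes from reading procfs for the kube-apiserver process, per the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command a few lines up. A small sketch of that lookup — using pgrep -n to take the newest PID is an assumption for illustration; the logged command uses plain pgrep — might be:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Locate the kube-apiserver process, as pgrep does in the logged command.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))

	// Read the kernel OOM adjustment for that process; the log shows -16,
	// which makes the apiserver a much less likely OOM-kill target.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}

)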
	I1028 18:34:23.216574   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:23.717592   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.217674   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:24.717602   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.216832   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:25.717673   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.217668   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:26.716727   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.217476   66600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 18:34:27.343171   66600 kubeadm.go:1113] duration metric: took 4.413029537s to wait for elevateKubeSystemPrivileges
	I1028 18:34:27.343201   66600 kubeadm.go:394] duration metric: took 5m1.603783417s to StartCluster
	I1028 18:34:27.343221   66600 settings.go:142] acquiring lock: {Name:mk4b3807b20ad17d3fb9665ed739115d4b5155ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.343302   66600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:34:27.344913   66600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-13443/kubeconfig: {Name:mk7d93b867c9c3bab1dd3f742ea085e8c6e7979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 18:34:27.345149   66600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 18:34:27.345210   66600 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 18:34:27.345282   66600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-021370"
	I1028 18:34:27.345297   66600 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-021370"
	W1028 18:34:27.345304   66600 addons.go:243] addon storage-provisioner should already be in state true
	I1028 18:34:27.345310   66600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-021370"
	I1028 18:34:27.345339   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345337   66600 addons.go:69] Setting metrics-server=true in profile "embed-certs-021370"
	I1028 18:34:27.345353   66600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-021370"
	I1028 18:34:27.345360   66600 addons.go:234] Setting addon metrics-server=true in "embed-certs-021370"
	W1028 18:34:27.345369   66600 addons.go:243] addon metrics-server should already be in state true
	I1028 18:34:27.345381   66600 config.go:182] Loaded profile config "embed-certs-021370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 18:34:27.345396   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.345742   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345766   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.345788   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345794   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.345798   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.346770   66600 out.go:177] * Verifying Kubernetes components...
	I1028 18:34:27.348169   66600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 18:34:27.361310   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1028 18:34:27.361763   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362073   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 18:34:27.362257   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.362292   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.362550   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.362640   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363049   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.363079   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.363204   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.363242   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.363425   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.363610   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.363934   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1028 18:34:27.364390   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.364865   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.364885   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.365229   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.365805   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.365852   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.367292   66600 addons.go:234] Setting addon default-storageclass=true in "embed-certs-021370"
	W1028 18:34:27.367314   66600 addons.go:243] addon default-storageclass should already be in state true
	I1028 18:34:27.367347   66600 host.go:66] Checking if "embed-certs-021370" exists ...
	I1028 18:34:27.367738   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.367782   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.381375   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1028 18:34:27.381846   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.382429   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.382441   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.382787   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.382926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.382965   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I1028 18:34:27.383568   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.384121   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.384134   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.384530   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.384730   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.384815   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386107   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I1028 18:34:27.386306   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.386435   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.386888   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.386911   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.386977   66600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 18:34:27.387284   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.387866   66600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 18:34:27.387883   66600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19872-13443/.minikube/bin/docker-machine-driver-kvm2
	I1028 18:34:27.388259   66600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 18:34:27.388628   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 18:34:27.388645   66600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 18:34:27.388658   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.390614   66600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.390634   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 18:34:27.390650   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.393252   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393734   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.393758   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.393926   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.394122   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.394238   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.394364   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.394640   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395084   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.395110   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.395201   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.395383   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.395540   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.395677   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.406551   66600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1028 18:34:27.406907   66600 main.go:141] libmachine: () Calling .GetVersion
	I1028 18:34:27.407358   66600 main.go:141] libmachine: Using API Version  1
	I1028 18:34:27.407376   66600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 18:34:27.407699   66600 main.go:141] libmachine: () Calling .GetMachineName
	I1028 18:34:27.407891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetState
	I1028 18:34:27.409287   66600 main.go:141] libmachine: (embed-certs-021370) Calling .DriverName
	I1028 18:34:27.409489   66600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.409502   66600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 18:34:27.409517   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHHostname
	I1028 18:34:27.412275   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412828   66600 main.go:141] libmachine: (embed-certs-021370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:5a:fa", ip: ""} in network mk-embed-certs-021370: {Iface:virbr2 ExpiryTime:2024-10-28 19:29:12 +0000 UTC Type:0 Mac:52:54:00:2e:5a:fa Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-021370 Clientid:01:52:54:00:2e:5a:fa}
	I1028 18:34:27.412858   66600 main.go:141] libmachine: (embed-certs-021370) DBG | domain embed-certs-021370 has defined IP address 192.168.50.62 and MAC address 52:54:00:2e:5a:fa in network mk-embed-certs-021370
	I1028 18:34:27.412984   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHPort
	I1028 18:34:27.413162   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHKeyPath
	I1028 18:34:27.413303   66600 main.go:141] libmachine: (embed-certs-021370) Calling .GetSSHUsername
	I1028 18:34:27.413453   66600 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/embed-certs-021370/id_rsa Username:docker}
	I1028 18:34:27.546891   66600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 18:34:27.571837   66600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595105   66600 node_ready.go:49] node "embed-certs-021370" has status "Ready":"True"
	I1028 18:34:27.595127   66600 node_ready.go:38] duration metric: took 23.255834ms for node "embed-certs-021370" to be "Ready" ...
	I1028 18:34:27.595156   66600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:27.603107   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:27.635422   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 18:34:27.657051   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 18:34:27.666085   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 18:34:27.666110   66600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 18:34:27.706366   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 18:34:27.706394   66600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 18:34:27.772162   66600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:27.772191   66600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 18:34:27.844116   66600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 18:34:28.411454   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411478   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411522   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411544   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.411751   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.411960   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.411982   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.411991   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.411998   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.412223   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.412266   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413310   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413326   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.413338   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.413344   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.413569   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.413584   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.420867   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.420891   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.421092   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.421168   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.421169   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957337   66600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.11317187s)
	I1028 18:34:28.957385   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957395   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957696   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957715   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957725   66600 main.go:141] libmachine: Making call to close driver server
	I1028 18:34:28.957733   66600 main.go:141] libmachine: (embed-certs-021370) Calling .Close
	I1028 18:34:28.957957   66600 main.go:141] libmachine: Successfully made call to close driver server
	I1028 18:34:28.957970   66600 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 18:34:28.957988   66600 main.go:141] libmachine: (embed-certs-021370) DBG | Closing plugin on server side
	I1028 18:34:28.957990   66600 addons.go:475] Verifying addon metrics-server=true in "embed-certs-021370"
	I1028 18:34:28.959590   66600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 18:34:28.961127   66600 addons.go:510] duration metric: took 1.615922156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 18:34:29.611126   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:32.110577   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:34.610544   66600 pod_ready.go:103] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"False"
	I1028 18:34:37.111319   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.111342   66600 pod_ready.go:82] duration metric: took 9.508204126s for pod "coredns-7c65d6cfc9-d5pk8" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.111351   66600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119547   66600 pod_ready.go:93] pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.119571   66600 pod_ready.go:82] duration metric: took 8.212577ms for pod "coredns-7c65d6cfc9-qw5gl" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.119581   66600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126030   66600 pod_ready.go:93] pod "etcd-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.126048   66600 pod_ready.go:82] duration metric: took 6.46043ms for pod "etcd-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.126056   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132366   66600 pod_ready.go:93] pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.132386   66600 pod_ready.go:82] duration metric: took 6.323715ms for pod "kube-apiserver-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.132394   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137151   66600 pod_ready.go:93] pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.137171   66600 pod_ready.go:82] duration metric: took 4.770272ms for pod "kube-controller-manager-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.137182   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507159   66600 pod_ready.go:93] pod "kube-proxy-nrr6g" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.507180   66600 pod_ready.go:82] duration metric: took 369.991591ms for pod "kube-proxy-nrr6g" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.507189   66600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908006   66600 pod_ready.go:93] pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace has status "Ready":"True"
	I1028 18:34:37.908030   66600 pod_ready.go:82] duration metric: took 400.834669ms for pod "kube-scheduler-embed-certs-021370" in "kube-system" namespace to be "Ready" ...
	I1028 18:34:37.908038   66600 pod_ready.go:39] duration metric: took 10.312872321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 18:34:37.908052   66600 api_server.go:52] waiting for apiserver process to appear ...
	I1028 18:34:37.908098   66600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 18:34:37.924515   66600 api_server.go:72] duration metric: took 10.579335154s to wait for apiserver process to appear ...
	I1028 18:34:37.924552   66600 api_server.go:88] waiting for apiserver healthz status ...
	I1028 18:34:37.924572   66600 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I1028 18:34:37.929438   66600 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I1028 18:34:37.930716   66600 api_server.go:141] control plane version: v1.31.2
	I1028 18:34:37.930742   66600 api_server.go:131] duration metric: took 6.181503ms to wait for apiserver health ...
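(Annotation: the same health probe can be issued manually. A sketch, assuming the apiserver address 192.168.50.62:8443 from the log is still reachable; -k skips verification of the cluster CA, which is fine for a quick health check.)

    # Probe the apiserver health endpoint checked above; expect the body "ok"
    curl -sk https://192.168.50.62:8443/healthz
    # Confirm the control-plane version the log reports (v1.31.2)
    kubectl --context embed-certs-021370 version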
	I1028 18:34:37.930752   66600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 18:34:38.113401   66600 system_pods.go:59] 9 kube-system pods found
	I1028 18:34:38.113430   66600 system_pods.go:61] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.113435   66600 system_pods.go:61] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.113439   66600 system_pods.go:61] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.113442   66600 system_pods.go:61] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.113446   66600 system_pods.go:61] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.113449   66600 system_pods.go:61] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.113452   66600 system_pods.go:61] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.113457   66600 system_pods.go:61] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.113462   66600 system_pods.go:61] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.113468   66600 system_pods.go:74] duration metric: took 182.711396ms to wait for pod list to return data ...
	I1028 18:34:38.113475   66600 default_sa.go:34] waiting for default service account to be created ...
	I1028 18:34:38.309139   66600 default_sa.go:45] found service account: "default"
	I1028 18:34:38.309170   66600 default_sa.go:55] duration metric: took 195.688587ms for default service account to be created ...
	I1028 18:34:38.309182   66600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 18:34:38.510307   66600 system_pods.go:86] 9 kube-system pods found
	I1028 18:34:38.510336   66600 system_pods.go:89] "coredns-7c65d6cfc9-d5pk8" [237c2887-86c5-485e-9548-49f0cb407435] Running
	I1028 18:34:38.510341   66600 system_pods.go:89] "coredns-7c65d6cfc9-qw5gl" [605aa4fe-2ed4-4246-a087-614d56c64c4f] Running
	I1028 18:34:38.510345   66600 system_pods.go:89] "etcd-embed-certs-021370" [71d1a2ed-7cbc-44ef-b62a-c49130a33915] Running
	I1028 18:34:38.510349   66600 system_pods.go:89] "kube-apiserver-embed-certs-021370" [6c7232c0-01dc-4c78-8f86-e403d395092e] Running
	I1028 18:34:38.510352   66600 system_pods.go:89] "kube-controller-manager-embed-certs-021370" [3487298e-08f9-4d48-a8ec-737be547019f] Running
	I1028 18:34:38.510355   66600 system_pods.go:89] "kube-proxy-nrr6g" [12cffbf7-943e-4853-9197-d4275a479d5d] Running
	I1028 18:34:38.510360   66600 system_pods.go:89] "kube-scheduler-embed-certs-021370" [67633aca-3f5b-4a0c-b592-8ada4a94b2ab] Running
	I1028 18:34:38.510368   66600 system_pods.go:89] "metrics-server-6867b74b74-hpwrm" [224f97d8-b44f-4392-a46b-c134004c061a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 18:34:38.510376   66600 system_pods.go:89] "storage-provisioner" [2e0ac8ad-5ba0-47a0-8613-7a6fba893f06] Running
	I1028 18:34:38.510391   66600 system_pods.go:126] duration metric: took 201.199416ms to wait for k8s-apps to be running ...
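(Annotation: the same kube-system pod list can be viewed interactively; context name assumed as above.)

    # List kube-system pods and their phases, mirroring the system_pods check
    kubectl --context embed-certs-021370 -n kube-system get pods -o wide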
	I1028 18:34:38.510403   66600 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 18:34:38.510448   66600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:38.526043   66600 system_svc.go:56] duration metric: took 15.628796ms WaitForService to wait for kubelet
	I1028 18:34:38.526075   66600 kubeadm.go:582] duration metric: took 11.18089878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
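(Annotation: the kubelet liveness check above is a plain systemd query inside the node. A sketch using minikube ssh, with the profile name assumed from this run.)

    # Returns "active" (exit 0) when the kubelet unit is running
    minikube ssh -p embed-certs-021370 "sudo systemctl is-active kubelet"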
	I1028 18:34:38.526109   66600 node_conditions.go:102] verifying NodePressure condition ...
	I1028 18:34:38.707568   66600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 18:34:38.707594   66600 node_conditions.go:123] node cpu capacity is 2
	I1028 18:34:38.707604   66600 node_conditions.go:105] duration metric: took 181.491056ms to run NodePressure ...
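(Annotation: the capacity figures logged above come from the node status and can be read back directly; context and node name assumed from this run.)

    # Print the node capacity map (cpu, ephemeral-storage, memory, pods)
    kubectl --context embed-certs-021370 get node embed-certs-021370 \
      -o jsonpath='{.status.capacity}'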
	I1028 18:34:38.707615   66600 start.go:241] waiting for startup goroutines ...
	I1028 18:34:38.707621   66600 start.go:246] waiting for cluster config update ...
	I1028 18:34:38.707631   66600 start.go:255] writing updated cluster config ...
	I1028 18:34:38.707950   66600 ssh_runner.go:195] Run: rm -f paused
	I1028 18:34:38.755355   66600 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 18:34:38.757256   66600 out.go:177] * Done! kubectl is now configured to use "embed-certs-021370" cluster and "default" namespace by default
	I1028 18:34:49.381931   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:34:49.382111   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:34:49.383570   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:34:49.383633   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:34:49.383732   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:34:49.383859   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:34:49.383975   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:34:49.384073   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:34:49.385654   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:34:49.385757   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:34:49.385847   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:34:49.385937   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:34:49.386008   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:34:49.386118   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:34:49.386214   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:34:49.386316   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:34:49.386391   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:34:49.386478   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:34:49.386597   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:34:49.386643   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:34:49.386724   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:34:49.386813   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:34:49.386891   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:34:49.386983   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:34:49.387070   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:34:49.387209   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:34:49.387330   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:34:49.387389   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:34:49.387474   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:34:49.389653   67149 out.go:235]   - Booting up control plane ...
	I1028 18:34:49.389760   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:34:49.389867   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:34:49.389971   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:34:49.390088   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:34:49.390228   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:34:49.390277   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:34:49.390355   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390550   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390645   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.390832   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.390903   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391069   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391163   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391354   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391452   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:34:49.391649   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:34:49.391657   67149 kubeadm.go:310] 
	I1028 18:34:49.391691   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:34:49.391743   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:34:49.391758   67149 kubeadm.go:310] 
	I1028 18:34:49.391789   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:34:49.391822   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:34:49.391908   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:34:49.391914   67149 kubeadm.go:310] 
	I1028 18:34:49.392024   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:34:49.392073   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:34:49.392133   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:34:49.392142   67149 kubeadm.go:310] 
	I1028 18:34:49.392267   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:34:49.392363   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:34:49.392380   67149 kubeadm.go:310] 
	I1028 18:34:49.392525   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:34:49.392629   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:34:49.392737   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:34:49.392830   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:34:49.392879   67149 kubeadm.go:310] 
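(Annotation: the troubleshooting commands kubeadm prints above can be run on the failing node itself. A sketch via minikube ssh; the old-k8s-version-223868 profile name is an assumption taken from the CRI-O log further below, and the crio.sock endpoint matches the one used throughout this log.)

    # Inspect kubelet state and recent log lines on the failing node
    minikube ssh -p old-k8s-version-223868 "sudo systemctl status kubelet --no-pager"
    minikube ssh -p old-k8s-version-223868 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
    # List any control-plane containers the runtime managed to create
    minikube ssh -p old-k8s-version-223868 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"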
	W1028 18:34:49.392949   67149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 18:34:49.392991   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 18:34:49.869859   67149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 18:34:49.884524   67149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 18:34:49.896293   67149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 18:34:49.896318   67149 kubeadm.go:157] found existing configuration files:
	
	I1028 18:34:49.896354   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 18:34:49.907312   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 18:34:49.907364   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 18:34:49.917926   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 18:34:49.928001   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 18:34:49.928048   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 18:34:49.938687   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.949217   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 18:34:49.949268   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 18:34:49.959955   67149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 18:34:49.970105   67149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 18:34:49.970156   67149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
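(Annotation: the stale-config cleanup above is a grep-and-remove pass over the kubeconfig files kubeadm writes. A minimal sketch of the same loop, assuming it runs on the node with sudo; the endpoint string is the one minikube greps for in the log.)

    # Drop any kubeconfig that does not point at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$f"
    done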
	I1028 18:34:49.980760   67149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 18:34:50.212973   67149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 18:36:46.686631   67149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 18:36:46.686753   67149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 18:36:46.688224   67149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 18:36:46.688325   67149 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 18:36:46.688449   67149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 18:36:46.688587   67149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 18:36:46.688726   67149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 18:36:46.688813   67149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 18:36:46.690320   67149 out.go:235]   - Generating certificates and keys ...
	I1028 18:36:46.690427   67149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 18:36:46.690524   67149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 18:36:46.690627   67149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 18:36:46.690720   67149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 18:36:46.690824   67149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 18:36:46.690897   67149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 18:36:46.690984   67149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 18:36:46.691064   67149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 18:36:46.691161   67149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 18:36:46.691253   67149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 18:36:46.691309   67149 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 18:36:46.691379   67149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 18:36:46.691426   67149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 18:36:46.691471   67149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 18:36:46.691547   67149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 18:36:46.691619   67149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 18:36:46.691713   67149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 18:36:46.691814   67149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 18:36:46.691864   67149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 18:36:46.691951   67149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 18:36:46.693258   67149 out.go:235]   - Booting up control plane ...
	I1028 18:36:46.693374   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 18:36:46.693471   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 18:36:46.693566   67149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 18:36:46.693682   67149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 18:36:46.693870   67149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 18:36:46.693930   67149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 18:36:46.694023   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694253   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694343   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694527   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694614   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.694798   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.694894   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695053   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695119   67149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 18:36:46.695315   67149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 18:36:46.695324   67149 kubeadm.go:310] 
	I1028 18:36:46.695357   67149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 18:36:46.695392   67149 kubeadm.go:310] 		timed out waiting for the condition
	I1028 18:36:46.695398   67149 kubeadm.go:310] 
	I1028 18:36:46.695427   67149 kubeadm.go:310] 	This error is likely caused by:
	I1028 18:36:46.695456   67149 kubeadm.go:310] 		- The kubelet is not running
	I1028 18:36:46.695542   67149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 18:36:46.695549   67149 kubeadm.go:310] 
	I1028 18:36:46.695665   67149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 18:36:46.695717   67149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 18:36:46.695767   67149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 18:36:46.695781   67149 kubeadm.go:310] 
	I1028 18:36:46.695921   67149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 18:36:46.696037   67149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 18:36:46.696048   67149 kubeadm.go:310] 
	I1028 18:36:46.696177   67149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 18:36:46.696285   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 18:36:46.696390   67149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 18:36:46.696512   67149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 18:36:46.696560   67149 kubeadm.go:310] 
	I1028 18:36:46.696579   67149 kubeadm.go:394] duration metric: took 7m56.601380499s to StartCluster
	I1028 18:36:46.696618   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 18:36:46.696670   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 18:36:46.738714   67149 cri.go:89] found id: ""
	I1028 18:36:46.738741   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.738749   67149 logs.go:284] No container was found matching "kube-apiserver"
	I1028 18:36:46.738757   67149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 18:36:46.738822   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 18:36:46.772906   67149 cri.go:89] found id: ""
	I1028 18:36:46.772934   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.772944   67149 logs.go:284] No container was found matching "etcd"
	I1028 18:36:46.772951   67149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 18:36:46.773028   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 18:36:46.808785   67149 cri.go:89] found id: ""
	I1028 18:36:46.808809   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.808819   67149 logs.go:284] No container was found matching "coredns"
	I1028 18:36:46.808827   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 18:36:46.808884   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 18:36:46.842977   67149 cri.go:89] found id: ""
	I1028 18:36:46.843007   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.843016   67149 logs.go:284] No container was found matching "kube-scheduler"
	I1028 18:36:46.843022   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 18:36:46.843095   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 18:36:46.878121   67149 cri.go:89] found id: ""
	I1028 18:36:46.878148   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.878159   67149 logs.go:284] No container was found matching "kube-proxy"
	I1028 18:36:46.878166   67149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 18:36:46.878231   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 18:36:46.911953   67149 cri.go:89] found id: ""
	I1028 18:36:46.911977   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.911984   67149 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 18:36:46.911990   67149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 18:36:46.912054   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 18:36:46.944291   67149 cri.go:89] found id: ""
	I1028 18:36:46.944317   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.944324   67149 logs.go:284] No container was found matching "kindnet"
	I1028 18:36:46.944329   67149 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 18:36:46.944379   67149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 18:36:46.976525   67149 cri.go:89] found id: ""
	I1028 18:36:46.976554   67149 logs.go:282] 0 containers: []
	W1028 18:36:46.976564   67149 logs.go:284] No container was found matching "kubernetes-dashboard"
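(Annotation: each "listing CRI containers" step above corresponds to a single crictl query. A sketch of the same checks run directly on the node; an empty result matches the 'found id: ""' lines in the log.)

    # Ask CRI-O for container IDs by name, one component at a time
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== $name =="
      sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --quiet --name="$name"
    done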
	I1028 18:36:46.976575   67149 logs.go:123] Gathering logs for kubelet ...
	I1028 18:36:46.976588   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 18:36:47.026517   67149 logs.go:123] Gathering logs for dmesg ...
	I1028 18:36:47.026544   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 18:36:47.041198   67149 logs.go:123] Gathering logs for describe nodes ...
	I1028 18:36:47.041231   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 18:36:47.115650   67149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 18:36:47.115681   67149 logs.go:123] Gathering logs for CRI-O ...
	I1028 18:36:47.115695   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 18:36:47.218059   67149 logs.go:123] Gathering logs for container status ...
	I1028 18:36:47.218093   67149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
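(Annotation: the log-gathering pass uses ordinary journalctl/dmesg invocations over SSH; a sketch of the runtime and kernel portions, again assuming the old-k8s-version-223868 profile.)

    # Recent container-runtime and kernel messages from the node
    minikube ssh -p old-k8s-version-223868 "sudo journalctl -u crio -n 400 --no-pager"
    minikube ssh -p old-k8s-version-223868 "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"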
	W1028 18:36:47.257114   67149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 18:36:47.257182   67149 out.go:270] * 
	W1028 18:36:47.257240   67149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.257280   67149 out.go:270] * 
	W1028 18:36:47.258088   67149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 18:36:47.261521   67149 out.go:201] 
	W1028 18:36:47.262707   67149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 18:36:47.262742   67149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 18:36:47.262760   67149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 18:36:47.264073   67149 out.go:201] 
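(Annotation: the suggested retry with an explicit cgroup driver would look roughly like this. The profile name and the crio runtime are assumptions based on this run, and --kubernetes-version pins the same v1.20.0 the failing init used.)

    # Retry the old-k8s-version start with the kubelet cgroup driver forced to systemd
    minikube start -p old-k8s-version-223868 --kubernetes-version=v1.20.0 \
      --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd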
	
	
	==> CRI-O <==
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.354507448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141288354488718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29cc1c53-057f-47b7-a655-c8456315bcb4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.355085308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d101b00-ac52-4415-b83e-9e1e2ea39e58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.355138081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d101b00-ac52-4415-b83e-9e1e2ea39e58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.355167138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d101b00-ac52-4415-b83e-9e1e2ea39e58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.384503066Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52296a40-8ace-4ec3-bb15-7ee7f681f69e name=/runtime.v1.RuntimeService/Version
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.384575800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52296a40-8ace-4ec3-bb15-7ee7f681f69e name=/runtime.v1.RuntimeService/Version
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.385982817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d96edc10-045e-4a82-9f55-462a6810a9b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.386370266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141288386347489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d96edc10-045e-4a82-9f55-462a6810a9b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.386898802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eebceca1-6afd-4cbf-8729-48a45a0bf2e0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.386950119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eebceca1-6afd-4cbf-8729-48a45a0bf2e0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.386983661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eebceca1-6afd-4cbf-8729-48a45a0bf2e0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.421478795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8f9a0e6-ffa9-4b28-9f57-507c8f3f9f65 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.421575358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8f9a0e6-ffa9-4b28-9f57-507c8f3f9f65 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.422802584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43c31bf5-25a6-49ac-83cb-4ededaa8e1f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.423200719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141288423176521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43c31bf5-25a6-49ac-83cb-4ededaa8e1f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.423895295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb81e078-9c9e-40f5-9a0d-f3d88b48b81a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.423944334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb81e078-9c9e-40f5-9a0d-f3d88b48b81a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.424018352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eb81e078-9c9e-40f5-9a0d-f3d88b48b81a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.460603945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bae43704-56f9-4c6f-8cc4-8b1aa4dd05c2 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.460723993Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bae43704-56f9-4c6f-8cc4-8b1aa4dd05c2 name=/runtime.v1.RuntimeService/Version
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.461883784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8cc8789-5161-4a9a-8f17-2b209bf8f4f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.462254410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730141288462236392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8cc8789-5161-4a9a-8f17-2b209bf8f4f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.463200350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d496671-d90b-4d9c-94fd-52dfb76aa67f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.463256356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d496671-d90b-4d9c-94fd-52dfb76aa67f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 18:48:08 old-k8s-version-223868 crio[633]: time="2024-10-28 18:48:08.463292728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9d496671-d90b-4d9c-94fd-52dfb76aa67f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 18:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052154] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040854] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.948848] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.654628] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568759] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.229575] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.078716] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057084] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.217028] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.132211] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.266373] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +7.871428] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.072119] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.097659] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[Oct28 18:29] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 18:32] systemd-fstab-generator[5063]: Ignoring "noauto" option for root device
	[Oct28 18:34] systemd-fstab-generator[5342]: Ignoring "noauto" option for root device
	[  +0.070292] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:48:08 up 19 min,  0 users,  load average: 0.15, 0.06, 0.05
	Linux old-k8s-version-223868 5.10.207 #1 SMP Mon Oct 28 15:05:56 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0009eaae0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0009aeae0, 0x24, 0x0, ...)
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]: net.(*Dialer).DialContext(0xc0003ec060, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009aeae0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0006cf6c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009aeae0, 0x24, 0x60, 0x7f44f7590880, 0x118, ...)
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]: net/http.(*Transport).dial(0xc000af6f00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009aeae0, 0x24, 0x0, 0x14, 0x5, ...)
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]: net/http.(*Transport).dialConn(0xc000af6f00, 0x4f7fe00, 0xc000120018, 0x0, 0xc0009b11a0, 0x5, 0xc0009aeae0, 0x24, 0x0, 0xc00089fd40, ...)
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]: net/http.(*Transport).dialConnFor(0xc000af6f00, 0xc0009fe000)
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]: created by net/http.(*Transport).queueForDial
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6823]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 28 18:48:05 old-k8s-version-223868 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 18:48:05 old-k8s-version-223868 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 18:48:05 old-k8s-version-223868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 138.
	Oct 28 18:48:05 old-k8s-version-223868 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 18:48:05 old-k8s-version-223868 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6832]: I1028 18:48:05.879906    6832 server.go:416] Version: v1.20.0
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6832]: I1028 18:48:05.880278    6832 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6832]: I1028 18:48:05.882701    6832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6832]: W1028 18:48:05.883751    6832 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 28 18:48:05 old-k8s-version-223868 kubelet[6832]: I1028 18:48:05.883946    6832 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 2 (216.203649ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-223868" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (136.20s)

                                                
                                    

Test pass (243/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 42.23
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 21.69
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 94.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 204.21
31 TestAddons/serial/GCPAuth/Namespaces 2.79
32 TestAddons/serial/GCPAuth/FakeCredentials 13.52
35 TestAddons/parallel/Registry 20.27
37 TestAddons/parallel/InspektorGadget 10.91
40 TestAddons/parallel/CSI 60.99
41 TestAddons/parallel/Headlamp 19.8
42 TestAddons/parallel/CloudSpanner 5.54
43 TestAddons/parallel/LocalPath 58.48
44 TestAddons/parallel/NvidiaDevicePlugin 6.75
45 TestAddons/parallel/Yakd 10.65
48 TestCertOptions 85.78
49 TestCertExpiration 282.17
51 TestForceSystemdFlag 59.28
52 TestForceSystemdEnv 83.94
54 TestKVMDriverInstallOrUpdate 12.47
58 TestErrorSpam/setup 38.76
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.73
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.7
63 TestErrorSpam/stop 5.03
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 89.06
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 55.56
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 2.67
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 31.33
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.36
86 TestFunctional/serial/LogsFileCmd 1.41
87 TestFunctional/serial/InvalidService 4.77
89 TestFunctional/parallel/ConfigCmd 0.29
90 TestFunctional/parallel/DashboardCmd 16.4
91 TestFunctional/parallel/DryRun 0.27
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 1.07
97 TestFunctional/parallel/ServiceCmdConnect 10.53
98 TestFunctional/parallel/AddonsCmd 0.12
99 TestFunctional/parallel/PersistentVolumeClaim 50.83
101 TestFunctional/parallel/SSHCmd 0.5
102 TestFunctional/parallel/CpCmd 1.32
103 TestFunctional/parallel/MySQL 28.81
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.24
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
113 TestFunctional/parallel/License 0.85
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.18
119 TestFunctional/parallel/ImageCommands/Setup 2.67
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.46
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.49
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.2
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.71
135 TestFunctional/parallel/ImageCommands/ImageRemove 1.53
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.99
137 TestFunctional/parallel/ServiceCmd/DeployApp 13.19
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.96
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
140 TestFunctional/parallel/ProfileCmd/profile_list 0.31
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
142 TestFunctional/parallel/MountCmd/any-port 15.47
143 TestFunctional/parallel/ServiceCmd/List 0.65
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
146 TestFunctional/parallel/ServiceCmd/Format 0.3
147 TestFunctional/parallel/ServiceCmd/URL 0.3
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
151 TestFunctional/parallel/MountCmd/specific-port 1.85
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 205.57
160 TestMultiControlPlane/serial/DeployApp 10.27
161 TestMultiControlPlane/serial/PingHostFromPods 1.17
162 TestMultiControlPlane/serial/AddWorkerNode 61.45
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
165 TestMultiControlPlane/serial/CopyFile 12.58
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.62
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
174 TestMultiControlPlane/serial/RestartCluster 343.8
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
176 TestMultiControlPlane/serial/AddSecondaryNode 82.07
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
181 TestJSONOutput/start/Command 83.1
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.62
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.34
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
209 TestMainNoArgs 0.04
210 TestMinikubeProfile 90.01
213 TestMountStart/serial/StartWithMountFirst 24.74
214 TestMountStart/serial/VerifyMountFirst 0.37
215 TestMountStart/serial/StartWithMountSecond 26.94
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.68
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.27
220 TestMountStart/serial/RestartStopped 22.89
221 TestMountStart/serial/VerifyMountPostStop 0.36
224 TestMultiNode/serial/FreshStart2Nodes 118.45
225 TestMultiNode/serial/DeployApp2Nodes 8.33
226 TestMultiNode/serial/PingHostFrom2Pods 0.76
227 TestMultiNode/serial/AddNode 55.9
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.54
230 TestMultiNode/serial/CopyFile 6.81
231 TestMultiNode/serial/StopNode 2.28
232 TestMultiNode/serial/StartAfterStop 41.06
234 TestMultiNode/serial/DeleteNode 2.14
236 TestMultiNode/serial/RestartMultiNode 180.44
237 TestMultiNode/serial/ValidateNameConflict 42.65
244 TestScheduledStopUnix 110.52
248 TestRunningBinaryUpgrade 183.93
258 TestStoppedBinaryUpgrade/Setup 3.74
260 TestStoppedBinaryUpgrade/Upgrade 174.34
262 TestPause/serial/Start 94.87
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
265 TestNoKubernetes/serial/StartWithK8s 52.99
267 TestNoKubernetes/serial/StartWithStopK8s 17.67
268 TestNoKubernetes/serial/Start 28.07
276 TestNetworkPlugins/group/false 3.11
280 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
282 TestNoKubernetes/serial/ProfileList 0.9
283 TestNoKubernetes/serial/Stop 1.28
284 TestNoKubernetes/serial/StartNoArgs 69.15
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
289 TestStartStop/group/no-preload/serial/FirstStart 107.06
291 TestStartStop/group/embed-certs/serial/FirstStart 74.39
292 TestStartStop/group/embed-certs/serial/DeployApp 13.28
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
295 TestStartStop/group/no-preload/serial/DeployApp 12.28
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.34
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.27
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
306 TestStartStop/group/embed-certs/serial/SecondStart 676
308 TestStartStop/group/no-preload/serial/SecondStart 599.4
309 TestStartStop/group/old-k8s-version/serial/Stop 3.28
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 520.9
323 TestStartStop/group/newest-cni/serial/FirstStart 45.27
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
326 TestStartStop/group/newest-cni/serial/Stop 10.32
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/newest-cni/serial/SecondStart 37.91
329 TestNetworkPlugins/group/auto/Start 98.12
330 TestNetworkPlugins/group/kindnet/Start 95.15
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
334 TestStartStop/group/newest-cni/serial/Pause 2.38
335 TestNetworkPlugins/group/flannel/Start 110.57
336 TestNetworkPlugins/group/auto/KubeletFlags 0.25
337 TestNetworkPlugins/group/auto/NetCatPod 11.25
338 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
340 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
341 TestNetworkPlugins/group/auto/DNS 0.18
342 TestNetworkPlugins/group/auto/Localhost 0.14
343 TestNetworkPlugins/group/auto/HairPin 0.13
344 TestNetworkPlugins/group/kindnet/DNS 0.19
345 TestNetworkPlugins/group/kindnet/Localhost 0.13
346 TestNetworkPlugins/group/kindnet/HairPin 0.15
347 TestNetworkPlugins/group/enable-default-cni/Start 81.44
348 TestNetworkPlugins/group/bridge/Start 83.17
349 TestNetworkPlugins/group/custom-flannel/Start 119.2
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
352 TestNetworkPlugins/group/flannel/NetCatPod 10.22
353 TestNetworkPlugins/group/flannel/DNS 0.17
354 TestNetworkPlugins/group/flannel/Localhost 0.14
355 TestNetworkPlugins/group/flannel/HairPin 0.42
356 TestNetworkPlugins/group/calico/Start 114.37
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.26
359 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
360 TestNetworkPlugins/group/bridge/NetCatPod 13.22
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
364 TestNetworkPlugins/group/bridge/DNS 0.16
365 TestNetworkPlugins/group/bridge/Localhost 0.13
366 TestNetworkPlugins/group/bridge/HairPin 0.13
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
369 TestNetworkPlugins/group/custom-flannel/DNS 0.14
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.19
374 TestNetworkPlugins/group/calico/NetCatPod 9.24
375 TestNetworkPlugins/group/calico/DNS 0.21
376 TestNetworkPlugins/group/calico/Localhost 0.12
377 TestNetworkPlugins/group/calico/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (42.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-565697 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-565697 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (42.231532239s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (42.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 17:06:42.809501   20680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1028 17:06:42.809616   20680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-565697
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-565697: exit status 85 (58.126189ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-565697 | jenkins | v1.34.0 | 28 Oct 24 17:06 UTC |          |
	|         | -p download-only-565697        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:06:00
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:06:00.615893   20693 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:06:00.616125   20693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:06:00.616135   20693 out.go:358] Setting ErrFile to fd 2...
	I1028 17:06:00.616139   20693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:06:00.616345   20693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	W1028 17:06:00.616505   20693 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19872-13443/.minikube/config/config.json: open /home/jenkins/minikube-integration/19872-13443/.minikube/config/config.json: no such file or directory
	I1028 17:06:00.617124   20693 out.go:352] Setting JSON to true
	I1028 17:06:00.617961   20693 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2904,"bootTime":1730132257,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:06:00.618018   20693 start.go:139] virtualization: kvm guest
	I1028 17:06:00.620275   20693 out.go:97] [download-only-565697] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1028 17:06:00.620375   20693 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 17:06:00.620418   20693 notify.go:220] Checking for updates...
	I1028 17:06:00.621871   20693 out.go:169] MINIKUBE_LOCATION=19872
	I1028 17:06:00.623277   20693 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:06:00.624984   20693 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:06:00.626751   20693 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:06:00.627957   20693 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 17:06:00.630325   20693 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 17:06:00.630521   20693 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:06:00.734655   20693 out.go:97] Using the kvm2 driver based on user configuration
	I1028 17:06:00.734703   20693 start.go:297] selected driver: kvm2
	I1028 17:06:00.734712   20693 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:06:00.735033   20693 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:06:00.735159   20693 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:06:00.749932   20693 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:06:00.749996   20693 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:06:00.750540   20693 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1028 17:06:00.750677   20693 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 17:06:00.750701   20693 cni.go:84] Creating CNI manager for ""
	I1028 17:06:00.750747   20693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:06:00.750755   20693 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 17:06:00.750801   20693 start.go:340] cluster config:
	{Name:download-only-565697 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-565697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:06:00.750967   20693 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:06:00.752705   20693 out.go:97] Downloading VM boot image ...
	I1028 17:06:00.752738   20693 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/iso/amd64/minikube-v1.34.0-1730109979-19872-amd64.iso
	I1028 17:06:16.270591   20693 out.go:97] Starting "download-only-565697" primary control-plane node in "download-only-565697" cluster
	I1028 17:06:16.270616   20693 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 17:06:16.432845   20693 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 17:06:16.432876   20693 cache.go:56] Caching tarball of preloaded images
	I1028 17:06:16.433003   20693 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 17:06:16.435245   20693 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 17:06:16.435266   20693 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1028 17:06:16.590571   20693 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-565697 host does not exist
	  To start a cluster, run: "minikube start -p download-only-565697"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-565697
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (21.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-852823 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-852823 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (21.685821519s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (21.69s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 17:07:04.805963   20680 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1028 17:07:04.805997   20680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-852823
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-852823: exit status 85 (58.375044ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-565697 | jenkins | v1.34.0 | 28 Oct 24 17:06 UTC |                     |
	|         | -p download-only-565697        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 17:06 UTC | 28 Oct 24 17:06 UTC |
	| delete  | -p download-only-565697        | download-only-565697 | jenkins | v1.34.0 | 28 Oct 24 17:06 UTC | 28 Oct 24 17:06 UTC |
	| start   | -o=json --download-only        | download-only-852823 | jenkins | v1.34.0 | 28 Oct 24 17:06 UTC |                     |
	|         | -p download-only-852823        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 17:06:43
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 17:06:43.158239   21023 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:06:43.158349   21023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:06:43.158359   21023 out.go:358] Setting ErrFile to fd 2...
	I1028 17:06:43.158366   21023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:06:43.158552   21023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:06:43.159109   21023 out.go:352] Setting JSON to true
	I1028 17:06:43.159910   21023 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2946,"bootTime":1730132257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:06:43.160001   21023 start.go:139] virtualization: kvm guest
	I1028 17:06:43.162068   21023 out.go:97] [download-only-852823] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:06:43.162211   21023 notify.go:220] Checking for updates...
	I1028 17:06:43.163531   21023 out.go:169] MINIKUBE_LOCATION=19872
	I1028 17:06:43.164909   21023 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:06:43.166367   21023 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:06:43.167497   21023 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:06:43.168992   21023 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 17:06:43.171189   21023 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 17:06:43.171397   21023 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:06:43.201760   21023 out.go:97] Using the kvm2 driver based on user configuration
	I1028 17:06:43.201791   21023 start.go:297] selected driver: kvm2
	I1028 17:06:43.201802   21023 start.go:901] validating driver "kvm2" against <nil>
	I1028 17:06:43.202122   21023 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:06:43.202210   21023 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19872-13443/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 17:06:43.216183   21023 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 17:06:43.216220   21023 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 17:06:43.216782   21023 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1028 17:06:43.216966   21023 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 17:06:43.216992   21023 cni.go:84] Creating CNI manager for ""
	I1028 17:06:43.217047   21023 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 17:06:43.217067   21023 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 17:06:43.217132   21023 start.go:340] cluster config:
	{Name:download-only-852823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-852823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:06:43.217229   21023 iso.go:125] acquiring lock: {Name:mk4bb7d4356e1dfb8d1c969e06867795cd3b0eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 17:06:43.218791   21023 out.go:97] Starting "download-only-852823" primary control-plane node in "download-only-852823" cluster
	I1028 17:06:43.218815   21023 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:06:43.450348   21023 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:06:43.450377   21023 cache.go:56] Caching tarball of preloaded images
	I1028 17:06:43.450531   21023 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 17:06:43.452417   21023 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 17:06:43.452431   21023 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 17:06:43.607681   21023 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 17:07:03.110471   21023 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 17:07:03.110570   21023 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19872-13443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-852823 host does not exist
	  To start a cluster, run: "minikube start -p download-only-852823"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-852823
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 17:07:05.341961   20680 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-523787 --alsologtostderr --binary-mirror http://127.0.0.1:45457 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-523787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-523787
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (94.6s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-146010 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-146010 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m33.554193385s)
helpers_test.go:175: Cleaning up "offline-crio-146010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-146010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-146010: (1.049183068s)
--- PASS: TestOffline (94.60s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-186035
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-186035: exit status 85 (55.336817ms)

                                                
                                                
-- stdout --
	* Profile "addons-186035" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-186035"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-186035
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-186035: exit status 85 (56.435743ms)

                                                
                                                
-- stdout --
	* Profile "addons-186035" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-186035"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (204.21s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-186035 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-186035 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m24.206044067s)
--- PASS: TestAddons/Setup (204.21s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (2.79s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-186035 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-186035 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-186035 get secret gcp-auth -n new-namespace: exit status 1 (80.398659ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-186035 logs -l app=gcp-auth -n gcp-auth
I1028 17:10:30.664364   20680 retry.go:31] will retry after 2.53635203s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/10/28 17:10:29 GCP Auth Webhook started!
	2024/10/28 17:10:30 Ready to marshal response ...
	2024/10/28 17:10:30 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-186035 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.79s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (13.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-186035 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-186035 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5783d2c6-cf3e-4775-9b0d-19fc4b151df3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5783d2c6-cf3e-4775-9b0d-19fc4b151df3] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 13.008463515s
addons_test.go:633: (dbg) Run:  kubectl --context addons-186035 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-186035 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-186035 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (13.52s)

                                                
                                    
x
+
TestAddons/parallel/Registry (20.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.330877ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zzlqq" [b84d4f13-3ad1-4d7c-81fc-5def543dae51] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00301078s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7nj9m" [783bc207-34a0-49f6-a31b-d358ca0aa6e3] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003893065s
addons_test.go:331: (dbg) Run:  kubectl --context addons-186035 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-186035 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-186035 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.46364541s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 ip
2024/10/28 17:11:14 [DEBUG] GET http://192.168.39.15:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.27s)
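As context for the registry check above: the test probes the registry both through the in-cluster name (http://registry.kube-system.svc.cluster.local, via busybox wget) and through the node endpoint printed in the DEBUG line (http://192.168.39.15:5000). A minimal Go sketch of that reachability probe follows; the use of net/http instead of wget is an illustrative assumption, not the test's actual mechanism.

// registry_probe.go: hedged sketch of a registry reachability check.
// The endpoint below is the node address printed in the log above; inside
// the cluster the test uses http://registry.kube-system.svc.cluster.local.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// Assumed endpoint for illustration; the test resolves it via `minikube ip`.
	resp, err := client.Get("http://192.168.39.15:5000")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with status", resp.StatusCode)
}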

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xchg7" [045e4dca-eca4-45e3-bd09-129b3c53fff6] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004687387s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 addons disable inspektor-gadget --alsologtostderr -v=1: (5.899840475s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
x
+
TestAddons/parallel/CSI (60.99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1028 17:11:14.926995   20680 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1028 17:11:14.932096   20680 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 17:11:14.932121   20680 kapi.go:107] duration metric: took 5.141861ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.152738ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-186035 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-186035 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d87316ca-f4b6-4889-b07b-3ff2559bdbfa] Pending
helpers_test.go:344: "task-pv-pod" [d87316ca-f4b6-4889-b07b-3ff2559bdbfa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d87316ca-f4b6-4889-b07b-3ff2559bdbfa] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003236469s
addons_test.go:511: (dbg) Run:  kubectl --context addons-186035 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-186035 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-186035 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-186035 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-186035 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-186035 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-186035 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9521741b-1fd2-4c79-904e-5c0457733369] Pending
helpers_test.go:344: "task-pv-pod-restore" [9521741b-1fd2-4c79-904e-5c0457733369] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9521741b-1fd2-4c79-904e-5c0457733369] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003993478s
addons_test.go:553: (dbg) Run:  kubectl --context addons-186035 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-186035 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-186035 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.679673687s)
--- PASS: TestAddons/parallel/CSI (60.99s)
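The long run of identical `kubectl get pvc hpvc -o jsonpath={.status.phase}` invocations above is the helper polling until the claim reaches the phase it is waiting for. A rough Go equivalent of that loop is sketched below; it assumes kubectl is on PATH, that "Bound" is the target phase, and that a 2-second interval is used (the helper's exact interval and success condition are not shown in the log).

// pvc_wait.go: hedged sketch of polling a PVC's phase via kubectl,
// mirroring the repeated `get pvc ... -o jsonpath={.status.phase}` calls above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCPhase(context, name, namespace, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // polling interval is an assumption
	}
	return fmt.Errorf("pvc %s did not reach phase %q within %s", name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-186035", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("pvc bound")
}

The same shape of loop covers the hpvc-restore and test-pvc waits seen in the CSI and LocalPath sections.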

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-186035 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-trw8b" [2b9d3014-a0e0-4520-885d-4d2f69ac8346] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-trw8b" [2b9d3014-a0e0-4520-885d-4d2f69ac8346] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-trw8b" [2b9d3014-a0e0-4520-885d-4d2f69ac8346] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00386736s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 addons disable headlamp --alsologtostderr -v=1: (5.89118886s)
--- PASS: TestAddons/parallel/Headlamp (19.80s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-cxvcn" [ed963b39-0b15-4339-a834-97b86a9294c3] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003168748s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (58.48s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-186035 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-186035 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b0b8743e-57e9-4afd-b93f-ed65c783831a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b0b8743e-57e9-4afd-b93f-ed65c783831a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b0b8743e-57e9-4afd-b93f-ed65c783831a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004278633s
addons_test.go:906: (dbg) Run:  kubectl --context addons-186035 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 ssh "cat /opt/local-path-provisioner/pvc-055034d5-d0f2-4684-852f-71b9bf776565_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-186035 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-186035 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.72953634s)
--- PASS: TestAddons/parallel/LocalPath (58.48s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.75s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rtk85" [cf1f792a-317b-462d-bd89-3d40fc15ae2e] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00356643s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.75s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-gw2db" [df9e1c49-df24-41a8-b38a-cf64b68716ab] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003708806s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-186035 addons disable yakd --alsologtostderr -v=1: (5.649913978s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

                                                
                                    
x
+
TestCertOptions (85.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-040988 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-040988 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m24.323532785s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-040988 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-040988 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-040988 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-040988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-040988
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-040988: (1.004000413s)
--- PASS: TestCertOptions (85.78s)

                                                
                                    
x
+
TestCertExpiration (282.17s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-559364 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-559364 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m2.376543192s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-559364 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-559364 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.991592079s)
helpers_test.go:175: Cleaning up "cert-expiration-559364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-559364
--- PASS: TestCertExpiration (282.17s)

                                                
                                    
x
+
TestForceSystemdFlag (59.28s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-889327 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-889327 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.334476992s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-889327 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-889327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-889327
--- PASS: TestForceSystemdFlag (59.28s)

                                                
                                    
x
+
TestForceSystemdEnv (83.94s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-806978 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-806978 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.793441119s)
helpers_test.go:175: Cleaning up "force-systemd-env-806978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-806978
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-806978: (1.148976136s)
--- PASS: TestForceSystemdEnv (83.94s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (12.47s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1028 18:12:14.998762   20680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 18:12:14.998905   20680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1028 18:12:15.032776   20680 install.go:62] docker-machine-driver-kvm2: exit status 1
W1028 18:12:15.033087   20680 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 18:12:15.033153   20680 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4277925268/001/docker-machine-driver-kvm2
I1028 18:12:15.654769   20680 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4277925268/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000792400 gz:0xc000792408 tar:0xc0007923a0 tar.bz2:0xc0007923c0 tar.gz:0xc0007923d0 tar.xz:0xc0007923e0 tar.zst:0xc0007923f0 tbz2:0xc0007923c0 tgz:0xc0007923d0 txz:0xc0007923e0 tzst:0xc0007923f0 xz:0xc000792410 zip:0xc000792420 zst:0xc000792418] Getters:map[file:0xc000c02bf0 http:0xc00073eb40 https:0xc00073eb90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 18:12:15.654813   20680 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4277925268/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (12.47s)
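The messages above show the installer first trying the architecture-specific release asset and, after the 404 on its checksum file, falling back to the unsuffixed "common" asset. A simplified Go sketch of that fallback follows, using the two URLs from the log; the real code goes through go-getter with checksum verification, which is deliberately omitted here.

// driver_fallback.go: hedged sketch of "try arch-specific asset, fall back to
// the common one" as seen in the TestKVMDriverInstallOrUpdate log above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func fetchFirst(dst string, urls ...string) error {
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			continue
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			continue // e.g. the 404 seen in the log for the -amd64 checksum
		}
		f, err := os.Create(dst)
		if err != nil {
			resp.Body.Close()
			return err
		}
		_, err = io.Copy(f, resp.Body)
		resp.Body.Close()
		f.Close()
		return err
	}
	return fmt.Errorf("no candidate URL succeeded")
}

func main() {
	err := fetchFirst("docker-machine-driver-kvm2",
		"https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64",
		"https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("driver downloaded")
}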

                                                
                                    
x
+
TestErrorSpam/setup (38.76s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-575099 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-575099 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-575099 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-575099 --driver=kvm2  --container-runtime=crio: (38.763851812s)
--- PASS: TestErrorSpam/setup (38.76s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.03s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 stop: (2.319372217s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 stop: (1.171369543s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-575099 --log_dir /tmp/nospam-575099 stop: (1.541542221s)
--- PASS: TestErrorSpam/stop (5.03s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19872-13443/.minikube/files/etc/test/nested/copy/20680/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (89.06s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-972498 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1028 17:20:33.437550   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:33.443890   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:33.455254   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:33.476541   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:33.517911   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:33.599334   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:33.760906   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:34.082650   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:34.724697   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:36.006291   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:38.569166   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:43.690863   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:20:53.933042   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:21:14.414484   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-972498 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m29.057895653s)
--- PASS: TestFunctional/serial/StartWithProxy (89.06s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (55.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1028 17:21:54.557674   20680 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-972498 --alsologtostderr -v=8
E1028 17:21:55.376272   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-972498 --alsologtostderr -v=8: (55.557693068s)
functional_test.go:663: soft start took 55.558357626s for "functional-972498" cluster.
I1028 17:22:50.115696   20680 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (55.56s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-972498 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 cache add registry.k8s.io/pause:3.1: (1.155748217s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 cache add registry.k8s.io/pause:3.3: (1.192754353s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 cache add registry.k8s.io/pause:latest: (1.129902515s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-972498 /tmp/TestFunctionalserialCacheCmdcacheadd_local4014830853/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cache add minikube-local-cache-test:functional-972498
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 cache add minikube-local-cache-test:functional-972498: (2.368918631s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cache delete minikube-local-cache-test:functional-972498
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-972498
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.67s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.211526ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
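The cache_reload sequence above removes the pause image inside the node, confirms that `crictl inspecti` now fails, runs `minikube cache reload`, and confirms the image is back. A hedged Go sketch of the same round trip, shelling out to the commands shown in the log (the binary path, profile name, and image tag are copied from the output above):

// cache_reload_sketch.go: hedged re-run of the cache reload round trip above,
// assuming out/minikube-linux-amd64 and a running "functional-972498" profile.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const profile = "functional-972498"
	const image = "registry.k8s.io/pause:latest"

	// Remove the image inside the node.
	if err := run("-p", profile, "ssh", "sudo crictl rmi "+image); err != nil {
		fmt.Fprintln(os.Stderr, "rmi failed:", err)
		os.Exit(1)
	}
	// inspecti is now expected to fail (exit status 1 in the log).
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		fmt.Fprintln(os.Stderr, "image unexpectedly still present")
		os.Exit(1)
	}
	// Reload from minikube's local cache and verify the image is back.
	if err := run("-p", profile, "cache", "reload"); err != nil {
		fmt.Fprintln(os.Stderr, "cache reload failed:", err)
		os.Exit(1)
	}
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		fmt.Fprintln(os.Stderr, "image still missing after reload:", err)
		os.Exit(1)
	}
	fmt.Println("cache reload round trip OK")
}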

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 kubectl -- --context functional-972498 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-972498 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.33s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-972498 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1028 17:23:17.300261   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-972498 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.334183593s)
functional_test.go:761: restart took 31.3342957s for "functional-972498" cluster.
I1028 17:23:29.884104   20680 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (31.33s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-972498 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
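ComponentHealth above reports each control-plane pod's phase and Ready status from `kubectl get po -l tier=control-plane -n kube-system -o=json`. A small Go sketch of that check follows, decoding only the fields it needs; the pared-down struct is an assumption for illustration, not the test's actual types.

// component_health_sketch.go: hedged sketch of the control-plane health check
// above; decodes only .status.phase and the Ready condition from kubectl JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-972498",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}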

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 logs: (1.363299756s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 logs --file /tmp/TestFunctionalserialLogsFileCmd401133024/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 logs --file /tmp/TestFunctionalserialLogsFileCmd401133024/001/logs.txt: (1.406855125s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.77s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-972498 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-972498
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-972498: exit status 115 (278.181594ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.15:32707 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-972498 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-972498 delete -f testdata/invalidsvc.yaml: (1.314651216s)
--- PASS: TestFunctional/serial/InvalidService (4.77s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 config get cpus: exit status 14 (46.062516ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 config get cpus: exit status 14 (45.218485ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (16.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-972498 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-972498 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 31038: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.40s)
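
The dashboard run is started as a background daemon and then stopped; the helpers_test.go:508 message shows the process had already exited by the time the test tried to kill it, which is treated as harmless. A minimal sketch of that start/stop pattern, assuming a minikube binary on PATH (this is not the test's own helper):

	package main

	import (
		"errors"
		"log"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		cmd := exec.Command("minikube", "dashboard", "--url", "--port", "36195", "-p", "functional-972498")
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}

		// Stand-in for the test's real work while the daemon is alive.
		time.Sleep(10 * time.Second)

		// Killing an already-exited child returns os.ErrProcessDone; treat that
		// the same way the log above does, as a non-fatal condition.
		if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
			log.Printf("unable to kill pid %d: %v", cmd.Process.Pid, err)
		}
		cmd.Wait() // reap the child in either case
	}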

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-972498 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-972498 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.433065ms)

                                                
                                                
-- stdout --
	* [functional-972498] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:24:05.230086   30908 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:24:05.230527   30908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:05.230582   30908 out.go:358] Setting ErrFile to fd 2...
	I1028 17:24:05.230603   30908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:05.231004   30908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:24:05.231799   30908 out.go:352] Setting JSON to false
	I1028 17:24:05.232648   30908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3988,"bootTime":1730132257,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:24:05.232742   30908 start.go:139] virtualization: kvm guest
	I1028 17:24:05.234495   30908 out.go:177] * [functional-972498] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 17:24:05.236089   30908 notify.go:220] Checking for updates...
	I1028 17:24:05.236101   30908 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:24:05.237503   30908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:24:05.238726   30908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:24:05.239853   30908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:05.240957   30908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:24:05.242214   30908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:24:05.243841   30908 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:24:05.244282   30908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:05.244334   30908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:05.259598   30908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I1028 17:24:05.260098   30908 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:05.260695   30908 main.go:141] libmachine: Using API Version  1
	I1028 17:24:05.260717   30908 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:05.260986   30908 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:05.261154   30908 main.go:141] libmachine: (functional-972498) Calling .DriverName
	I1028 17:24:05.261356   30908 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:24:05.261630   30908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:05.261662   30908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:05.276071   30908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I1028 17:24:05.276528   30908 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:05.276952   30908 main.go:141] libmachine: Using API Version  1
	I1028 17:24:05.276975   30908 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:05.277244   30908 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:05.277415   30908 main.go:141] libmachine: (functional-972498) Calling .DriverName
	I1028 17:24:05.308889   30908 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 17:24:05.310395   30908 start.go:297] selected driver: kvm2
	I1028 17:24:05.310405   30908 start.go:901] validating driver "kvm2" against &{Name:functional-972498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-972498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:24:05.310485   30908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:24:05.312852   30908 out.go:201] 
	W1028 17:24:05.314344   30908 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 17:24:05.315463   30908 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-972498 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
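
The first dry run asks for 250MB of memory and is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work happens; the second dry run, without the memory override, succeeds. A minimal sketch of the rejection check, assuming a minikube binary on PATH:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-972498",
			"--dry-run", "--memory", "250MB",
			"--driver=kvm2", "--container-runtime=crio")
		err := cmd.Run()
		ee, ok := err.(*exec.ExitError)
		if !ok {
			log.Fatalf("expected start to be rejected, got: %v", err)
		}
		// The run logged above exited with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY).
		fmt.Printf("rejected with exit status %d\n", ee.ExitCode())
	}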

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-972498 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-972498 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.845568ms)

                                                
                                                
-- stdout --
	* [functional-972498] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:24:05.502512   30975 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:24:05.502777   30975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:05.502787   30975 out.go:358] Setting ErrFile to fd 2...
	I1028 17:24:05.502791   30975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:24:05.503078   30975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:24:05.503615   30975 out.go:352] Setting JSON to false
	I1028 17:24:05.504533   30975 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3988,"bootTime":1730132257,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 17:24:05.504593   30975 start.go:139] virtualization: kvm guest
	I1028 17:24:05.506813   30975 out.go:177] * [functional-972498] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1028 17:24:05.507996   30975 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 17:24:05.507996   30975 notify.go:220] Checking for updates...
	I1028 17:24:05.509229   30975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 17:24:05.510508   30975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 17:24:05.511675   30975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 17:24:05.512840   30975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 17:24:05.513999   30975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 17:24:05.515725   30975 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:24:05.516097   30975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:05.516156   30975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:05.531261   30975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I1028 17:24:05.531737   30975 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:05.532321   30975 main.go:141] libmachine: Using API Version  1
	I1028 17:24:05.532339   30975 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:05.532792   30975 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:05.533015   30975 main.go:141] libmachine: (functional-972498) Calling .DriverName
	I1028 17:24:05.533313   30975 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 17:24:05.533743   30975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:24:05.533794   30975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:24:05.548413   30975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I1028 17:24:05.548908   30975 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:24:05.549457   30975 main.go:141] libmachine: Using API Version  1
	I1028 17:24:05.549482   30975 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:24:05.549785   30975 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:24:05.549962   30975 main.go:141] libmachine: (functional-972498) Calling .DriverName
	I1028 17:24:05.581847   30975 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1028 17:24:05.583153   30975 start.go:297] selected driver: kvm2
	I1028 17:24:05.583173   30975 start.go:901] validating driver "kvm2" against &{Name:functional-972498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19872/minikube-v1.34.0-1730109979-19872-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-972498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 17:24:05.583323   30975 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 17:24:05.585678   30975 out.go:201] 
	W1028 17:24:05.587003   30975 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 17:24:05.588274   30975 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
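
The status command is exercised three ways: default output, a Go template over the Host/Kubelet/APIServer/Kubeconfig fields, and -o json. A minimal sketch of consuming the JSON form, assuming a minikube binary on PATH; the struct fields mirror the template above, but the exact JSON shape is an assumption, not something taken from the test:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// status carries the fields referenced by the Go template in the log above.
	type status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-972498", "status", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var st status
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}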

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-972498 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-972498 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9wmv9" [ac7838a8-d16f-4439-ac87-74c1f049921e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9wmv9" [ac7838a8-d16f-4439-ac87-74c1f049921e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003504057s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.15:31786
functional_test.go:1675: http://192.168.39.15:31786: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-9wmv9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.15:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.15:31786
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
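
After the deployment is exposed as a NodePort service, the test asks minikube for the service URL and fetches it, expecting the echoserver response shown above. A minimal sketch of that last step; the URL is the one reported in this run and will differ between runs:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"strings"
	)

	func main() {
		// NodePort endpoint reported by `minikube service hello-node-connect --url` in this run.
		resp, err := http.Get("http://192.168.39.15:31786")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		if strings.Contains(string(body), "Hostname: hello-node-connect") {
			fmt.Println("echoserver answered from the expected deployment")
		}
	}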

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3948ef0d-1dcf-4c8d-a325-b8073108bc39] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003504262s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-972498 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-972498 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-972498 get pvc myclaim -o=json
I1028 17:23:45.886648   20680 retry.go:31] will retry after 2.1396434s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:d81ca1d9-8e22-4d5f-be48-b76c8ddda150 ResourceVersion:842 Generation:0 CreationTimestamp:2024-10-28 17:23:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-d81ca1d9-8e22-4d5f-be48-b76c8ddda150 StorageClassName:0xc001c54e10 VolumeMode:0xc001c54e20 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-972498 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-972498 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4225c917-db31-47ef-8347-75f9a6cceeca] Pending
helpers_test.go:344: "sp-pod" [4225c917-db31-47ef-8347-75f9a6cceeca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4225c917-db31-47ef-8347-75f9a6cceeca] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.003827745s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-972498 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-972498 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-972498 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6aa856a9-c272-4da3-a630-b4a1227c783b] Pending
helpers_test.go:344: "sp-pod" [6aa856a9-c272-4da3-a630-b4a1227c783b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6aa856a9-c272-4da3-a630-b4a1227c783b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.0039174s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-972498 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.83s)
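
The retry message above shows the test polling the claim until its phase changes from Pending to Bound before it schedules the pod that mounts it. A minimal sketch of that wait, assuming kubectl on PATH and using a jsonpath query instead of decoding the full object as the test does:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "functional-972498",
				"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
			if err != nil {
				log.Fatal(err)
			}
			if strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc myclaim is Bound")
				return
			}
			// Still Pending; wait and poll again, like the retry.go message above.
			time.Sleep(2 * time.Second)
		}
		log.Fatal(`pvc myclaim never reached phase "Bound"`)
	}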

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh -n functional-972498 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cp functional-972498:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd482960378/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh -n functional-972498 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh -n functional-972498 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)
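
Each cp is immediately followed by an ssh cat of the destination path, so the test is really a copy-then-compare loop. A minimal sketch of one round trip, assuming a minikube binary on PATH and the same testdata file used above:

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		local, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}

		// Copy the file into the node, then read it back over ssh, mirroring the
		// cp/ssh command pair in the log above.
		if err := exec.Command("minikube", "-p", "functional-972498",
			"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
			log.Fatal(err)
		}
		remote, err := exec.Command("minikube", "-p", "functional-972498",
			"ssh", "-n", "functional-972498", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}

		if bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)) {
			fmt.Println("copied file matches the local testdata")
		} else {
			fmt.Println("contents differ")
		}
	}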

                                                
                                    
TestFunctional/parallel/MySQL (28.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-972498 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-7677n" [3a08da94-c3f4-406c-8ff9-0b7489c9c86e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-7677n" [3a08da94-c3f4-406c-8ff9-0b7489c9c86e] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.007091707s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-972498 exec mysql-6cdb49bbb-7677n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-972498 exec mysql-6cdb49bbb-7677n -- mysql -ppassword -e "show databases;": exit status 1 (268.964262ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 17:24:03.011904   20680 retry.go:31] will retry after 888.728752ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-972498 exec mysql-6cdb49bbb-7677n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-972498 exec mysql-6cdb49bbb-7677n -- mysql -ppassword -e "show databases;": exit status 1 (235.993872ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 17:24:04.137832   20680 retry.go:31] will retry after 1.22774259s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-972498 exec mysql-6cdb49bbb-7677n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-972498 exec mysql-6cdb49bbb-7677n -- mysql -ppassword -e "show databases;": exit status 1 (259.462933ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 17:24:05.625868   20680 retry.go:31] will retry after 1.489344668s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-972498 exec mysql-6cdb49bbb-7677n -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.81s)
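
The retry.go messages above show the pattern: the pod reports Running before mysqld finishes initialising, so the first exec attempts fail (access denied, then the socket is not ready) and are retried with growing delays until "show databases;" succeeds. A minimal sketch of that loop, assuming kubectl on PATH; the delays and attempt limit here are illustrative, not the test's real schedule:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-972498",
				"exec", "mysql-6cdb49bbb-7677n", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			// mysqld is still coming up; back off and try again.
			log.Printf("attempt %d failed (%v), retrying in %s", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		log.Fatal("mysql never became reachable")
	}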

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20680/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo cat /etc/test/nested/copy/20680/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20680.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo cat /etc/ssl/certs/20680.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20680.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo cat /usr/share/ca-certificates/20680.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/206802.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo cat /etc/ssl/certs/206802.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/206802.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo cat /usr/share/ca-certificates/206802.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.24s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-972498 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh "sudo systemctl is-active docker": exit status 1 (191.540313ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh "sudo systemctl is-active containerd": exit status 1 (200.179346ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
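
With crio as the container runtime, docker and containerd must report inactive. Note that systemctl is-active exits non-zero for inactive units (status 3 above, surfacing as exit status 1 from minikube ssh), so the check has to look at the printed state rather than the exit code. A minimal sketch of that check, assuming a minikube binary on PATH:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// A non-zero exit is expected here when the unit is inactive, so the
		// error is only reported if the printed state is not the one we want.
		out, err := exec.Command("minikube", "-p", "functional-972498",
			"ssh", "sudo systemctl is-active docker").CombinedOutput()
		state := strings.TrimSpace(string(out))
		if state == "inactive" {
			fmt.Println("docker is disabled, as expected with --container-runtime=crio")
			return
		}
		log.Fatalf("unexpected state %q (err: %v)", state, err)
	}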

                                                
                                    
TestFunctional/parallel/License (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-972498 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-972498
localhost/kicbase/echo-server:functional-972498
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-972498 image ls --format short --alsologtostderr:
I1028 17:24:07.912308   31092 out.go:345] Setting OutFile to fd 1 ...
I1028 17:24:07.912419   31092 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:07.912427   31092 out.go:358] Setting ErrFile to fd 2...
I1028 17:24:07.912430   31092 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:07.912619   31092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
I1028 17:24:07.913163   31092 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:07.913253   31092 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:07.913592   31092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:07.913644   31092 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:07.928233   31092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40895
I1028 17:24:07.928677   31092 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:07.929279   31092 main.go:141] libmachine: Using API Version  1
I1028 17:24:07.929318   31092 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:07.929596   31092 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:07.929778   31092 main.go:141] libmachine: (functional-972498) Calling .GetState
I1028 17:24:07.931374   31092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:07.931408   31092 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:07.946749   31092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
I1028 17:24:07.947191   31092 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:07.947696   31092 main.go:141] libmachine: Using API Version  1
I1028 17:24:07.947717   31092 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:07.948042   31092 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:07.948199   31092 main.go:141] libmachine: (functional-972498) Calling .DriverName
I1028 17:24:07.948399   31092 ssh_runner.go:195] Run: systemctl --version
I1028 17:24:07.948427   31092 main.go:141] libmachine: (functional-972498) Calling .GetSSHHostname
I1028 17:24:07.950662   31092 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:07.950981   31092 main.go:141] libmachine: (functional-972498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:5f", ip: ""} in network mk-functional-972498: {Iface:virbr1 ExpiryTime:2024-10-28 18:20:40 +0000 UTC Type:0 Mac:52:54:00:d5:51:5f Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-972498 Clientid:01:52:54:00:d5:51:5f}
I1028 17:24:07.951020   31092 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined IP address 192.168.39.15 and MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:07.951130   31092 main.go:141] libmachine: (functional-972498) Calling .GetSSHPort
I1028 17:24:07.951268   31092 main.go:141] libmachine: (functional-972498) Calling .GetSSHKeyPath
I1028 17:24:07.951411   31092 main.go:141] libmachine: (functional-972498) Calling .GetSSHUsername
I1028 17:24:07.951506   31092 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/functional-972498/id_rsa Username:docker}
I1028 17:24:08.035991   31092 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:24:08.087079   31092 main.go:141] libmachine: Making call to close driver server
I1028 17:24:08.087091   31092 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:08.087434   31092 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:08.087435   31092 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:08.087460   31092 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 17:24:08.087474   31092 main.go:141] libmachine: Making call to close driver server
I1028 17:24:08.087486   31092 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:08.087712   31092 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:08.087742   31092 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 17:24:08.087761   31092 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
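
The stderr above shows what image ls does under the hood on a crio cluster: it runs sudo crictl images --output json on the node and formats the result. A minimal sketch of reading that JSON directly over minikube ssh; the field names in the struct are an assumption about crictl's output shape, not something taken from this log:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// crictlImages is an assumed subset of `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-972498",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			log.Fatal(err)
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}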

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-972498 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-972498  | 96b88ff7761e8 | 3.33kB |
| localhost/my-image                      | functional-972498  | dc9bf789d65b8 | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-972498  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-972498 image ls --format table --alsologtostderr:
I1028 17:24:12.761336   31344 out.go:345] Setting OutFile to fd 1 ...
I1028 17:24:12.761428   31344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:12.761435   31344 out.go:358] Setting ErrFile to fd 2...
I1028 17:24:12.761439   31344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:12.761635   31344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
I1028 17:24:12.762162   31344 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:12.762287   31344 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:12.762648   31344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:12.762697   31344 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:12.777355   31344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
I1028 17:24:12.777795   31344 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:12.778374   31344 main.go:141] libmachine: Using API Version  1
I1028 17:24:12.778396   31344 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:12.778662   31344 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:12.778815   31344 main.go:141] libmachine: (functional-972498) Calling .GetState
I1028 17:24:12.780366   31344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:12.780399   31344 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:12.794277   31344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33751
I1028 17:24:12.794689   31344 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:12.795201   31344 main.go:141] libmachine: Using API Version  1
I1028 17:24:12.795223   31344 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:12.795532   31344 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:12.795687   31344 main.go:141] libmachine: (functional-972498) Calling .DriverName
I1028 17:24:12.795860   31344 ssh_runner.go:195] Run: systemctl --version
I1028 17:24:12.795880   31344 main.go:141] libmachine: (functional-972498) Calling .GetSSHHostname
I1028 17:24:12.798303   31344 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:12.798685   31344 main.go:141] libmachine: (functional-972498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:5f", ip: ""} in network mk-functional-972498: {Iface:virbr1 ExpiryTime:2024-10-28 18:20:40 +0000 UTC Type:0 Mac:52:54:00:d5:51:5f Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-972498 Clientid:01:52:54:00:d5:51:5f}
I1028 17:24:12.798710   31344 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined IP address 192.168.39.15 and MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:12.798871   31344 main.go:141] libmachine: (functional-972498) Calling .GetSSHPort
I1028 17:24:12.798996   31344 main.go:141] libmachine: (functional-972498) Calling .GetSSHKeyPath
I1028 17:24:12.799156   31344 main.go:141] libmachine: (functional-972498) Calling .GetSSHUsername
I1028 17:24:12.799252   31344 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/functional-972498/id_rsa Username:docker}
I1028 17:24:12.890143   31344 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:24:12.933807   31344 main.go:141] libmachine: Making call to close driver server
I1028 17:24:12.933825   31344 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:12.934046   31344 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:12.934066   31344 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 17:24:12.934069   31344 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:12.934086   31344 main.go:141] libmachine: Making call to close driver server
I1028 17:24:12.934094   31344 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:12.934273   31344 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:12.934325   31344 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:12.934339   31344 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-972498 image ls --format json --alsologtostderr:
[{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"dc9bf789d65b85175714d47d11f8bf4563e14879cfb9fa2587972eaa32682621","repoDigests":["localhost/my-image@sha256:3960c9376c03b6b20e323a917641b95a8f0f01ce53d6c4dc3c418bcf986e8a76"],"repoTags":["localhost/my-image:functional-972498"],"size":"1468600"},{"id":"82e4c8a736a
4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535e
fb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhos
t/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-972498"],"size":"4943877"},{"id":"96b88ff7761e843b87ce95f8970c093ec71a2aa7be8fc0b48fb76b6cb9e6d4b8","repoDigests":["localhost/minikube-local-cache-test@sha256:91fa37dc68756e20cefb495c5fe69217e0443a1ff98d3bdb953e682f044fa885"],"repoTags":["localhost/minikube-local-cache-test:functional-972498"],"size":"3330"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03
c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"35cc2caa633cacd167d0003d0cefee73e0ca938b6db2ed7b95e2d0c7e683ab3f","repoDigests":["docker.io/library/627dc8efd5e0758db454e4e539d4f56587ea4ef48392abe457da2b0bdd5d30b0-tmp@sha256:918620aa22960886aec1c2f76795a414e1c8848986b8d16f76f4de5dc03a1983"],"repoTags":[],"size":"1466018"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c783825
59c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94
434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-972498 image ls --format json --alsologtostderr:
I1028 17:24:12.518104   31281 out.go:345] Setting OutFile to fd 1 ...
I1028 17:24:12.518370   31281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:12.518380   31281 out.go:358] Setting ErrFile to fd 2...
I1028 17:24:12.518384   31281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:12.518654   31281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
I1028 17:24:12.519447   31281 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:12.519581   31281 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:12.520077   31281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:12.520125   31281 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:12.534769   31281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41953
I1028 17:24:12.535178   31281 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:12.535705   31281 main.go:141] libmachine: Using API Version  1
I1028 17:24:12.535718   31281 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:12.536015   31281 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:12.536174   31281 main.go:141] libmachine: (functional-972498) Calling .GetState
I1028 17:24:12.537918   31281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:12.537952   31281 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:12.551774   31281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43739
I1028 17:24:12.552060   31281 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:12.552518   31281 main.go:141] libmachine: Using API Version  1
I1028 17:24:12.552543   31281 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:12.552808   31281 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:12.552980   31281 main.go:141] libmachine: (functional-972498) Calling .DriverName
I1028 17:24:12.553179   31281 ssh_runner.go:195] Run: systemctl --version
I1028 17:24:12.553208   31281 main.go:141] libmachine: (functional-972498) Calling .GetSSHHostname
I1028 17:24:12.555809   31281 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:12.556131   31281 main.go:141] libmachine: (functional-972498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:5f", ip: ""} in network mk-functional-972498: {Iface:virbr1 ExpiryTime:2024-10-28 18:20:40 +0000 UTC Type:0 Mac:52:54:00:d5:51:5f Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-972498 Clientid:01:52:54:00:d5:51:5f}
I1028 17:24:12.556157   31281 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined IP address 192.168.39.15 and MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:12.556287   31281 main.go:141] libmachine: (functional-972498) Calling .GetSSHPort
I1028 17:24:12.556451   31281 main.go:141] libmachine: (functional-972498) Calling .GetSSHKeyPath
I1028 17:24:12.556605   31281 main.go:141] libmachine: (functional-972498) Calling .GetSSHUsername
I1028 17:24:12.556718   31281 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/functional-972498/id_rsa Username:docker}
I1028 17:24:12.646070   31281 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:24:12.702032   31281 main.go:141] libmachine: Making call to close driver server
I1028 17:24:12.702052   31281 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:12.702295   31281 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:12.702314   31281 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:12.702317   31281 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 17:24:12.702336   31281 main.go:141] libmachine: Making call to close driver server
I1028 17:24:12.702347   31281 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:12.702646   31281 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:12.702657   31281 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:12.702669   31281 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
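
For reference, the stdout captured above is a single JSON array of image objects, so it can be filtered with standard tooling. A minimal sketch, not part of the test output, assuming jq is installed on the host:

    out/minikube-linux-amd64 -p functional-972498 image ls --format json \
      | jq -r '.[] | .repoTags[]?'                     # print every tag known to the crio image store
    out/minikube-linux-amd64 -p functional-972498 image ls --format json \
      | jq -r '.[] | select(.repoTags == []) | .id'    # untagged images, e.g. build intermediates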

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-972498 image ls --format yaml --alsologtostderr:
- id: 96b88ff7761e843b87ce95f8970c093ec71a2aa7be8fc0b48fb76b6cb9e6d4b8
repoDigests:
- localhost/minikube-local-cache-test@sha256:91fa37dc68756e20cefb495c5fe69217e0443a1ff98d3bdb953e682f044fa885
repoTags:
- localhost/minikube-local-cache-test:functional-972498
size: "3330"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-972498
size: "4943877"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-972498 image ls --format yaml --alsologtostderr:
I1028 17:24:08.133717   31116 out.go:345] Setting OutFile to fd 1 ...
I1028 17:24:08.133812   31116 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:08.133820   31116 out.go:358] Setting ErrFile to fd 2...
I1028 17:24:08.133824   31116 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:08.133980   31116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
I1028 17:24:08.134528   31116 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:08.134617   31116 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:08.134945   31116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:08.134980   31116 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:08.149303   31116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
I1028 17:24:08.149797   31116 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:08.150339   31116 main.go:141] libmachine: Using API Version  1
I1028 17:24:08.150360   31116 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:08.150678   31116 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:08.150840   31116 main.go:141] libmachine: (functional-972498) Calling .GetState
I1028 17:24:08.152491   31116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:08.152524   31116 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:08.166382   31116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
I1028 17:24:08.166796   31116 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:08.167255   31116 main.go:141] libmachine: Using API Version  1
I1028 17:24:08.167285   31116 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:08.167591   31116 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:08.167766   31116 main.go:141] libmachine: (functional-972498) Calling .DriverName
I1028 17:24:08.167959   31116 ssh_runner.go:195] Run: systemctl --version
I1028 17:24:08.167979   31116 main.go:141] libmachine: (functional-972498) Calling .GetSSHHostname
I1028 17:24:08.170399   31116 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:08.170733   31116 main.go:141] libmachine: (functional-972498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:5f", ip: ""} in network mk-functional-972498: {Iface:virbr1 ExpiryTime:2024-10-28 18:20:40 +0000 UTC Type:0 Mac:52:54:00:d5:51:5f Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-972498 Clientid:01:52:54:00:d5:51:5f}
I1028 17:24:08.170761   31116 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined IP address 192.168.39.15 and MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:08.170867   31116 main.go:141] libmachine: (functional-972498) Calling .GetSSHPort
I1028 17:24:08.171014   31116 main.go:141] libmachine: (functional-972498) Calling .GetSSHKeyPath
I1028 17:24:08.171126   31116 main.go:141] libmachine: (functional-972498) Calling .GetSSHUsername
I1028 17:24:08.171273   31116 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/functional-972498/id_rsa Username:docker}
I1028 17:24:08.250890   31116 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:24:08.289537   31116 main.go:141] libmachine: Making call to close driver server
I1028 17:24:08.289558   31116 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:08.289831   31116 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:08.289855   31116 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:08.289869   31116 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 17:24:08.289879   31116 main.go:141] libmachine: Making call to close driver server
I1028 17:24:08.289887   31116 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:08.290104   31116 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:08.290123   31116 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:08.290132   31116 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh pgrep buildkitd: exit status 1 (180.490558ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image build -t localhost/my-image:functional-972498 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 image build -t localhost/my-image:functional-972498 testdata/build --alsologtostderr: (3.738588129s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-972498 image build -t localhost/my-image:functional-972498 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 35cc2caa633
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-972498
--> dc9bf789d65
Successfully tagged localhost/my-image:functional-972498
dc9bf789d65b85175714d47d11f8bf4563e14879cfb9fa2587972eaa32682621
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-972498 image build -t localhost/my-image:functional-972498 testdata/build --alsologtostderr:
I1028 17:24:08.515145   31185 out.go:345] Setting OutFile to fd 1 ...
I1028 17:24:08.515265   31185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:08.515273   31185 out.go:358] Setting ErrFile to fd 2...
I1028 17:24:08.515277   31185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:24:08.515472   31185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
I1028 17:24:08.516004   31185 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:08.516570   31185 config.go:182] Loaded profile config "functional-972498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 17:24:08.516923   31185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:08.516992   31185 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:08.531147   31185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39811
I1028 17:24:08.531622   31185 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:08.532114   31185 main.go:141] libmachine: Using API Version  1
I1028 17:24:08.532137   31185 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:08.532450   31185 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:08.532649   31185 main.go:141] libmachine: (functional-972498) Calling .GetState
I1028 17:24:08.534279   31185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 17:24:08.534322   31185 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 17:24:08.548199   31185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
I1028 17:24:08.548656   31185 main.go:141] libmachine: () Calling .GetVersion
I1028 17:24:08.549068   31185 main.go:141] libmachine: Using API Version  1
I1028 17:24:08.549092   31185 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 17:24:08.549372   31185 main.go:141] libmachine: () Calling .GetMachineName
I1028 17:24:08.549506   31185 main.go:141] libmachine: (functional-972498) Calling .DriverName
I1028 17:24:08.549701   31185 ssh_runner.go:195] Run: systemctl --version
I1028 17:24:08.549721   31185 main.go:141] libmachine: (functional-972498) Calling .GetSSHHostname
I1028 17:24:08.552140   31185 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:08.552530   31185 main.go:141] libmachine: (functional-972498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:5f", ip: ""} in network mk-functional-972498: {Iface:virbr1 ExpiryTime:2024-10-28 18:20:40 +0000 UTC Type:0 Mac:52:54:00:d5:51:5f Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-972498 Clientid:01:52:54:00:d5:51:5f}
I1028 17:24:08.552557   31185 main.go:141] libmachine: (functional-972498) DBG | domain functional-972498 has defined IP address 192.168.39.15 and MAC address 52:54:00:d5:51:5f in network mk-functional-972498
I1028 17:24:08.552728   31185 main.go:141] libmachine: (functional-972498) Calling .GetSSHPort
I1028 17:24:08.552883   31185 main.go:141] libmachine: (functional-972498) Calling .GetSSHKeyPath
I1028 17:24:08.553047   31185 main.go:141] libmachine: (functional-972498) Calling .GetSSHUsername
I1028 17:24:08.553165   31185 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/functional-972498/id_rsa Username:docker}
I1028 17:24:08.639385   31185 build_images.go:161] Building image from path: /tmp/build.2502116245.tar
I1028 17:24:08.639438   31185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 17:24:08.652578   31185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2502116245.tar
I1028 17:24:08.656806   31185 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2502116245.tar: stat -c "%s %y" /var/lib/minikube/build/build.2502116245.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2502116245.tar': No such file or directory
I1028 17:24:08.656832   31185 ssh_runner.go:362] scp /tmp/build.2502116245.tar --> /var/lib/minikube/build/build.2502116245.tar (3072 bytes)
I1028 17:24:08.685272   31185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2502116245
I1028 17:24:08.695576   31185 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2502116245 -xf /var/lib/minikube/build/build.2502116245.tar
I1028 17:24:08.704992   31185 crio.go:315] Building image: /var/lib/minikube/build/build.2502116245
I1028 17:24:08.705080   31185 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-972498 /var/lib/minikube/build/build.2502116245 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1028 17:24:12.173723   31185 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-972498 /var/lib/minikube/build/build.2502116245 --cgroup-manager=cgroupfs: (3.468616234s)
I1028 17:24:12.173806   31185 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2502116245
I1028 17:24:12.197083   31185 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2502116245.tar
I1028 17:24:12.209181   31185 build_images.go:217] Built localhost/my-image:functional-972498 from /tmp/build.2502116245.tar
I1028 17:24:12.209209   31185 build_images.go:133] succeeded building to: functional-972498
I1028 17:24:12.209214   31185 build_images.go:134] failed building to: 
I1028 17:24:12.209258   31185 main.go:141] libmachine: Making call to close driver server
I1028 17:24:12.209270   31185 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:12.209520   31185 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:12.209533   31185 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 17:24:12.209542   31185 main.go:141] libmachine: Making call to close driver server
I1028 17:24:12.209548   31185 main.go:141] libmachine: (functional-972498) Calling .Close
I1028 17:24:12.209772   31185 main.go:141] libmachine: (functional-972498) DBG | Closing plugin on server side
I1028 17:24:12.209830   31185 main.go:141] libmachine: Successfully made call to close driver server
I1028 17:24:12.209844   31185 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)
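
The STEP lines in the build output imply a three-instruction Containerfile plus a content.txt in testdata/build. A rough reconstruction of an equivalent build context, with illustrative file contents rather than the repository's actual testdata:

    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    printf 'hello from content.txt\n' > content.txt    # placeholder payload
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-amd64 -p functional-972498 image build -t localhost/my-image:functional-972498 . --alsologtostderr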

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.649562045s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-972498
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.67s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image load --daemon kicbase/echo-server:functional-972498 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 image load --daemon kicbase/echo-server:functional-972498 --alsologtostderr: (2.275004645s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image load --daemon kicbase/echo-server:functional-972498 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.186252306s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-972498
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image load --daemon kicbase/echo-server:functional-972498 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image save kicbase/echo-server:functional-972498 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image rm kicbase/echo-server:functional-972498 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 image rm kicbase/echo-server:functional-972498 --alsologtostderr: (1.051205582s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.890999155s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 image ls: (1.103433705s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.99s)
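
Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise a save/remove/load round trip. Sketched end to end with the same commands the tests run (the tarball path here is arbitrary):

    out/minikube-linux-amd64 -p functional-972498 image save kicbase/echo-server:functional-972498 ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-972498 image rm kicbase/echo-server:functional-972498 --alsologtostderr
    out/minikube-linux-amd64 -p functional-972498 image load ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-972498 image ls    # the echo-server tag should be listed again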

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-972498 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-972498 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-7d8jr" [a9c8b89b-c71b-42c2-8487-4e2bc3378e8a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-7d8jr" [a9c8b89b-c71b-42c2-8487-4e2bc3378e8a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.014780042s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.19s)
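
The readiness wait above is done by the test framework; checking the same state by hand is plain kubectl against the deployment and the NodePort service created by the expose call (nothing minikube-specific):

    kubectl --context functional-972498 get pods -l app=hello-node
    kubectl --context functional-972498 get svc hello-node -o wide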

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-972498
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 image save --daemon kicbase/echo-server:functional-972498 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-972498 image save --daemon kicbase/echo-server:functional-972498 --alsologtostderr: (3.924405374s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-972498
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.96s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "254.858758ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.770497ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "263.690408ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.463865ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdany-port2876058784/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730136237863388244" to /tmp/TestFunctionalparallelMountCmdany-port2876058784/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730136237863388244" to /tmp/TestFunctionalparallelMountCmdany-port2876058784/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730136237863388244" to /tmp/TestFunctionalparallelMountCmdany-port2876058784/001/test-1730136237863388244
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.272482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 17:23:58.056927   20680 retry.go:31] will retry after 601.527914ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 28 17:23 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 28 17:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 28 17:23 test-1730136237863388244
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh cat /mount-9p/test-1730136237863388244
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-972498 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8e8725a3-0d5b-4742-bd3e-be3e2ba9df78] Pending
helpers_test.go:344: "busybox-mount" [8e8725a3-0d5b-4742-bd3e-be3e2ba9df78] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8e8725a3-0d5b-4742-bd3e-be3e2ba9df78] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8e8725a3-0d5b-4742-bd3e-be3e2ba9df78] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.003945837s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-972498 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdany-port2876058784/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.47s)
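
What the mount tests drive can be reproduced by hand with the same binary; a minimal sketch, with the host directory and probe file being illustrative names:

    mkdir -p /tmp/mount-sketch && echo probe > /tmp/mount-sketch/probe.txt
    out/minikube-linux-amd64 mount -p functional-972498 /tmp/mount-sketch:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-972498 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"    # VerifyCleanup instead tears mounts down via `minikube mount -p ... --kill=true`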

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 service list -o json
functional_test.go:1494: Took "456.530613ms" to run "out/minikube-linux-amd64 -p functional-972498 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.15:31050
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.15:31050
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
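
The endpoint discovered above can be exercised directly from the host; a sketch, assuming curl is available (echoserver:1.8 should answer over plain HTTP with details of the request it received):

    URL=$(out/minikube-linux-amd64 -p functional-972498 service hello-node --url)
    curl -sS "$URL" | head -n 20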

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdspecific-port1741292928/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (186.703975ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 17:24:13.523025   20680 retry.go:31] will retry after 693.376434ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdspecific-port1741292928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh "sudo umount -f /mount-9p": exit status 1 (184.489788ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-972498 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdspecific-port1741292928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2923582059/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2923582059/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2923582059/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T" /mount1: exit status 1 (251.792984ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 17:24:15.437371   20680 retry.go:31] will retry after 394.504731ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-972498 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-972498 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2923582059/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2923582059/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-972498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2923582059/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/10/28 17:24:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)
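The single "will retry after 394.504731ms" line above is the test's backoff loop waiting for /mount1 to show up in findmnt before it checks the other two mounts. A minimal sketch of that retry-until-mounted pattern, assuming a hypothetical waitForMount helper rather than the retry package the test actually uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls "findmnt -T <path>" until it succeeds or the deadline
// passes, doubling the sleep after each failed attempt.
func waitForMount(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for {
		if err := exec.Command("findmnt", "-T", path).Run(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("%s not mounted after %s: %v", path, timeout, err)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	for _, p := range []string{"/mount1", "/mount2", "/mount3"} {
		if err := waitForMount(p, 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
}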

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-972498
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-972498
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-972498
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (205.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-381619 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 17:25:33.435581   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:26:01.142421   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-381619 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.913448641s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (205.57s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-381619 -- rollout status deployment/busybox: (8.175422856s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-26cg9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-9n6bb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-dxwnw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-26cg9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-9n6bb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-dxwnw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-26cg9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-9n6bb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-dxwnw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.27s)
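The DeployApp checks above run the same probe against every busybox pod: exec nslookup for a public name, the in-cluster service short name, and its fully qualified form. A minimal sketch of that loop, assuming kubectl is on PATH and reusing the context and pod names from the log (this is not the test's own helper code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-26cg9", "busybox-7dff88458-9n6bb", "busybox-7dff88458-dxwnw"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Equivalent to: kubectl --context ha-381619 exec <pod> -- nslookup <name>
			out, err := exec.Command("kubectl", "--context", "ha-381619",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
}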

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-26cg9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-26cg9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-9n6bb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-9n6bb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-dxwnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-381619 -- exec busybox-7dff88458-dxwnw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-381619 -v=7 --alsologtostderr
E1028 17:28:38.395828   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:38.402247   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:38.413644   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:38.435025   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:38.476492   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:38.557712   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:38.719959   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:39.041551   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:39.684430   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:40.966152   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:43.528291   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:48.650566   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:28:58.891946   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-381619 -v=7 --alsologtostderr: (1m0.617514876s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.45s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-381619 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp testdata/cp-test.txt ha-381619:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619:/home/docker/cp-test.txt ha-381619-m02:/home/docker/cp-test_ha-381619_ha-381619-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test_ha-381619_ha-381619-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619:/home/docker/cp-test.txt ha-381619-m03:/home/docker/cp-test_ha-381619_ha-381619-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test_ha-381619_ha-381619-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619:/home/docker/cp-test.txt ha-381619-m04:/home/docker/cp-test_ha-381619_ha-381619-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test_ha-381619_ha-381619-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp testdata/cp-test.txt ha-381619-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m02:/home/docker/cp-test.txt ha-381619:/home/docker/cp-test_ha-381619-m02_ha-381619.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test_ha-381619-m02_ha-381619.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m02:/home/docker/cp-test.txt ha-381619-m03:/home/docker/cp-test_ha-381619-m02_ha-381619-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test_ha-381619-m02_ha-381619-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m02:/home/docker/cp-test.txt ha-381619-m04:/home/docker/cp-test_ha-381619-m02_ha-381619-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test_ha-381619-m02_ha-381619-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp testdata/cp-test.txt ha-381619-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test.txt"
E1028 17:29:19.374277   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt ha-381619:/home/docker/cp-test_ha-381619-m03_ha-381619.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test_ha-381619-m03_ha-381619.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt ha-381619-m02:/home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test_ha-381619-m03_ha-381619-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m03:/home/docker/cp-test.txt ha-381619-m04:/home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test_ha-381619-m03_ha-381619-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp testdata/cp-test.txt ha-381619-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile124664606/001/cp-test_ha-381619-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt ha-381619:/home/docker/cp-test_ha-381619-m04_ha-381619.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619 "sudo cat /home/docker/cp-test_ha-381619-m04_ha-381619.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt ha-381619-m02:/home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m02 "sudo cat /home/docker/cp-test_ha-381619-m04_ha-381619-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 cp ha-381619-m04:/home/docker/cp-test.txt ha-381619-m03:/home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 ssh -n ha-381619-m03 "sudo cat /home/docker/cp-test_ha-381619-m04_ha-381619-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.58s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-381619 node delete m03 -v=7 --alsologtostderr: (15.901139356s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.62s)
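The go-template passed to kubectl above prints the status of each node's Ready condition, one per line, so the test can confirm that every remaining node reports "True" after the delete. A small local sketch that runs the same template against stand-in data, so the template logic is visible without a cluster (the node list below is fabricated for illustration):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template the test hands to: kubectl get nodes -o go-template=...
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Stand-in for the NodeList JSON kubectl would feed the template.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
			{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}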

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (343.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-381619 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 17:40:33.437674   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:43:38.394833   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-381619 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m43.080617235s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (343.80s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-381619 --control-plane -v=7 --alsologtostderr
E1028 17:45:01.461010   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:45:33.436693   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-381619 --control-plane -v=7 --alsologtostderr: (1m21.259529016s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-381619 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (83.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-403568 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-403568 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.101287727s)
--- PASS: TestJSONOutput/start/Command (83.10s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-403568 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-403568 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-403568 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-403568 --output=json --user=testUser: (7.338788782s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-369162 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-369162 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.400453ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"97085807-d20d-404a-9d52-bd4dc3adb484","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-369162] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f44ccdbc-7ef8-4c64-99df-506ae8236c08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19872"}}
	{"specversion":"1.0","id":"c5c82c34-48fc-46bc-84a6-8fbcdef59e12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5ef41219-07d0-43c0-b61e-e77262e90747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig"}}
	{"specversion":"1.0","id":"8130f554-b41c-4c1a-acb7-ea5653fec305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube"}}
	{"specversion":"1.0","id":"009aec0e-6bfa-4fc7-88d1-a25753a65bd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bdaaf6aa-bf06-4627-bc4d-c45eb0233640","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3ea361c-6e59-4460-b0e7-2fc78f996e4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-369162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-369162
--- PASS: TestErrorJSONOutput (0.19s)
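Each stdout line above is a CloudEvents-style JSON object, and the test passes because the stream ends with an io.k8s.sigs.minikube.error event carrying exitcode 56 (DRV_UNSUPPORTED_OS). A minimal sketch of consuming that stream, using only the fields visible in the output above rather than minikube's own event types:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines printed above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Read a minikube --output=json stream from stdin, one object per line.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: name=%s exitcode=%s message=%q\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}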

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (90.01s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-058139 --driver=kvm2  --container-runtime=crio
E1028 17:48:38.398469   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-058139 --driver=kvm2  --container-runtime=crio: (44.331926573s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-091844 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-091844 --driver=kvm2  --container-runtime=crio: (42.925059957s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-058139
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-091844
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-091844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-091844
helpers_test.go:175: Cleaning up "first-058139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-058139
--- PASS: TestMinikubeProfile (90.01s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-112586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-112586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.738444192s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.74s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-112586 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-112586 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-129393 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-129393 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.940706648s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.94s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-129393 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-129393 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-112586 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-129393 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-129393 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-129393
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-129393: (1.268146045s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-129393
E1028 17:50:33.437695   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-129393: (21.88880818s)
--- PASS: TestMountStart/serial/RestartStopped (22.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-129393 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-129393 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (118.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.073275088s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.45s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-949956 -- rollout status deployment/busybox: (6.928272792s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dlps5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dvw7s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dlps5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dvw7s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dlps5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dvw7s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.33s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dlps5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dlps5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dvw7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7dff88458-dvw7s -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (55.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-949956 -v 3 --alsologtostderr
E1028 17:53:36.507012   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 17:53:38.395675   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-949956 -v 3 --alsologtostderr: (55.362534204s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.90s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-949956 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.54s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp testdata/cp-test.txt multinode-949956:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile997746669/001/cp-test_multinode-949956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956:/home/docker/cp-test.txt multinode-949956-m02:/home/docker/cp-test_multinode-949956_multinode-949956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test_multinode-949956_multinode-949956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956:/home/docker/cp-test.txt multinode-949956-m03:/home/docker/cp-test_multinode-949956_multinode-949956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test_multinode-949956_multinode-949956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp testdata/cp-test.txt multinode-949956-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile997746669/001/cp-test_multinode-949956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt multinode-949956:/home/docker/cp-test_multinode-949956-m02_multinode-949956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test_multinode-949956-m02_multinode-949956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt multinode-949956-m03:/home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp testdata/cp-test.txt multinode-949956-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile997746669/001/cp-test_multinode-949956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt multinode-949956:/home/docker/cp-test_multinode-949956-m03_multinode-949956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test_multinode-949956-m03_multinode-949956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt multinode-949956-m02:/home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.81s)

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 node stop m03: (1.469333691s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-949956 status: exit status 7 (406.200395ms)

                                                
                                                
-- stdout --
	multinode-949956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-949956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-949956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr: exit status 7 (405.799163ms)

                                                
                                                
-- stdout --
	multinode-949956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-949956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-949956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 17:53:59.460673   48382 out.go:345] Setting OutFile to fd 1 ...
	I1028 17:53:59.461127   48382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:53:59.461142   48382 out.go:358] Setting ErrFile to fd 2...
	I1028 17:53:59.461149   48382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 17:53:59.461612   48382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 17:53:59.461906   48382 out.go:352] Setting JSON to false
	I1028 17:53:59.461948   48382 mustload.go:65] Loading cluster: multinode-949956
	I1028 17:53:59.461996   48382 notify.go:220] Checking for updates...
	I1028 17:53:59.462556   48382 config.go:182] Loaded profile config "multinode-949956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 17:53:59.462582   48382 status.go:174] checking status of multinode-949956 ...
	I1028 17:53:59.462989   48382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:53:59.463045   48382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:53:59.478512   48382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42515
	I1028 17:53:59.478978   48382 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:53:59.479612   48382 main.go:141] libmachine: Using API Version  1
	I1028 17:53:59.479627   48382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:53:59.480120   48382 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:53:59.480315   48382 main.go:141] libmachine: (multinode-949956) Calling .GetState
	I1028 17:53:59.481947   48382 status.go:371] multinode-949956 host status = "Running" (err=<nil>)
	I1028 17:53:59.481962   48382 host.go:66] Checking if "multinode-949956" exists ...
	I1028 17:53:59.482297   48382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:53:59.482340   48382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:53:59.497357   48382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I1028 17:53:59.497731   48382 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:53:59.498152   48382 main.go:141] libmachine: Using API Version  1
	I1028 17:53:59.498175   48382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:53:59.498496   48382 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:53:59.498654   48382 main.go:141] libmachine: (multinode-949956) Calling .GetIP
	I1028 17:53:59.501119   48382 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:53:59.501457   48382 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:53:59.501480   48382 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:53:59.501607   48382 host.go:66] Checking if "multinode-949956" exists ...
	I1028 17:53:59.501875   48382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:53:59.501907   48382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:53:59.517357   48382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I1028 17:53:59.517708   48382 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:53:59.518150   48382 main.go:141] libmachine: Using API Version  1
	I1028 17:53:59.518168   48382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:53:59.518435   48382 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:53:59.518610   48382 main.go:141] libmachine: (multinode-949956) Calling .DriverName
	I1028 17:53:59.518768   48382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:53:59.518785   48382 main.go:141] libmachine: (multinode-949956) Calling .GetSSHHostname
	I1028 17:53:59.521294   48382 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:53:59.521683   48382 main.go:141] libmachine: (multinode-949956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:c7:9f", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:51:01 +0000 UTC Type:0 Mac:52:54:00:b7:c7:9f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-949956 Clientid:01:52:54:00:b7:c7:9f}
	I1028 17:53:59.521711   48382 main.go:141] libmachine: (multinode-949956) DBG | domain multinode-949956 has defined IP address 192.168.39.203 and MAC address 52:54:00:b7:c7:9f in network mk-multinode-949956
	I1028 17:53:59.521837   48382 main.go:141] libmachine: (multinode-949956) Calling .GetSSHPort
	I1028 17:53:59.521984   48382 main.go:141] libmachine: (multinode-949956) Calling .GetSSHKeyPath
	I1028 17:53:59.522126   48382 main.go:141] libmachine: (multinode-949956) Calling .GetSSHUsername
	I1028 17:53:59.522252   48382 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1028 17:53:59.603947   48382 ssh_runner.go:195] Run: systemctl --version
	I1028 17:53:59.609743   48382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:53:59.623578   48382 kubeconfig.go:125] found "multinode-949956" server: "https://192.168.39.203:8443"
	I1028 17:53:59.623604   48382 api_server.go:166] Checking apiserver status ...
	I1028 17:53:59.623632   48382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 17:53:59.636442   48382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1058/cgroup
	W1028 17:53:59.645613   48382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1028 17:53:59.645671   48382 ssh_runner.go:195] Run: ls
	I1028 17:53:59.650345   48382 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1028 17:53:59.654263   48382 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I1028 17:53:59.654281   48382 status.go:463] multinode-949956 apiserver status = Running (err=<nil>)
	I1028 17:53:59.654289   48382 status.go:176] multinode-949956 status: &{Name:multinode-949956 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:53:59.654303   48382 status.go:174] checking status of multinode-949956-m02 ...
	I1028 17:53:59.654582   48382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:53:59.654610   48382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:53:59.670546   48382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I1028 17:53:59.671025   48382 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:53:59.671512   48382 main.go:141] libmachine: Using API Version  1
	I1028 17:53:59.671537   48382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:53:59.671818   48382 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:53:59.671998   48382 main.go:141] libmachine: (multinode-949956-m02) Calling .GetState
	I1028 17:53:59.673566   48382 status.go:371] multinode-949956-m02 host status = "Running" (err=<nil>)
	I1028 17:53:59.673583   48382 host.go:66] Checking if "multinode-949956-m02" exists ...
	I1028 17:53:59.673968   48382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:53:59.674038   48382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:53:59.688420   48382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I1028 17:53:59.688874   48382 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:53:59.689293   48382 main.go:141] libmachine: Using API Version  1
	I1028 17:53:59.689311   48382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:53:59.689597   48382 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:53:59.689766   48382 main.go:141] libmachine: (multinode-949956-m02) Calling .GetIP
	I1028 17:53:59.692574   48382 main.go:141] libmachine: (multinode-949956-m02) DBG | domain multinode-949956-m02 has defined MAC address 52:54:00:fc:69:ed in network mk-multinode-949956
	I1028 17:53:59.692957   48382 main.go:141] libmachine: (multinode-949956-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:69:ed", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:52:04 +0000 UTC Type:0 Mac:52:54:00:fc:69:ed Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-949956-m02 Clientid:01:52:54:00:fc:69:ed}
	I1028 17:53:59.692984   48382 main.go:141] libmachine: (multinode-949956-m02) DBG | domain multinode-949956-m02 has defined IP address 192.168.39.100 and MAC address 52:54:00:fc:69:ed in network mk-multinode-949956
	I1028 17:53:59.693074   48382 host.go:66] Checking if "multinode-949956-m02" exists ...
	I1028 17:53:59.693388   48382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:53:59.693424   48382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:53:59.708394   48382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1028 17:53:59.708806   48382 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:53:59.709294   48382 main.go:141] libmachine: Using API Version  1
	I1028 17:53:59.709313   48382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:53:59.709585   48382 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:53:59.709739   48382 main.go:141] libmachine: (multinode-949956-m02) Calling .DriverName
	I1028 17:53:59.709893   48382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 17:53:59.709907   48382 main.go:141] libmachine: (multinode-949956-m02) Calling .GetSSHHostname
	I1028 17:53:59.712519   48382 main.go:141] libmachine: (multinode-949956-m02) DBG | domain multinode-949956-m02 has defined MAC address 52:54:00:fc:69:ed in network mk-multinode-949956
	I1028 17:53:59.712911   48382 main.go:141] libmachine: (multinode-949956-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:69:ed", ip: ""} in network mk-multinode-949956: {Iface:virbr1 ExpiryTime:2024-10-28 18:52:04 +0000 UTC Type:0 Mac:52:54:00:fc:69:ed Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-949956-m02 Clientid:01:52:54:00:fc:69:ed}
	I1028 17:53:59.712930   48382 main.go:141] libmachine: (multinode-949956-m02) DBG | domain multinode-949956-m02 has defined IP address 192.168.39.100 and MAC address 52:54:00:fc:69:ed in network mk-multinode-949956
	I1028 17:53:59.713103   48382 main.go:141] libmachine: (multinode-949956-m02) Calling .GetSSHPort
	I1028 17:53:59.713333   48382 main.go:141] libmachine: (multinode-949956-m02) Calling .GetSSHKeyPath
	I1028 17:53:59.713467   48382 main.go:141] libmachine: (multinode-949956-m02) Calling .GetSSHUsername
	I1028 17:53:59.713568   48382 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19872-13443/.minikube/machines/multinode-949956-m02/id_rsa Username:docker}
	I1028 17:53:59.791378   48382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 17:53:59.804610   48382 status.go:176] multinode-949956-m02 status: &{Name:multinode-949956-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1028 17:53:59.804639   48382 status.go:174] checking status of multinode-949956-m03 ...
	I1028 17:53:59.804927   48382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 17:53:59.804961   48382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 17:53:59.820245   48382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I1028 17:53:59.820688   48382 main.go:141] libmachine: () Calling .GetVersion
	I1028 17:53:59.821207   48382 main.go:141] libmachine: Using API Version  1
	I1028 17:53:59.821228   48382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 17:53:59.821540   48382 main.go:141] libmachine: () Calling .GetMachineName
	I1028 17:53:59.821743   48382 main.go:141] libmachine: (multinode-949956-m03) Calling .GetState
	I1028 17:53:59.823382   48382 status.go:371] multinode-949956-m03 host status = "Stopped" (err=<nil>)
	I1028 17:53:59.823393   48382 status.go:384] host is not running, skipping remaining checks
	I1028 17:53:59.823398   48382 status.go:176] multinode-949956-m03 status: &{Name:multinode-949956-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
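For reference, the flow this test exercises can be reproduced by hand with roughly the commands below; this is a minimal sketch reusing the profile and node names from this run, and the non-zero exit (status 7) from the status command is the expected signal that one host is stopped.

    # stop only the m03 worker of the multinode-949956 profile
    out/minikube-linux-amd64 -p multinode-949956 node stop m03
    # cluster-wide status now reports m03 as Stopped and exits with status 7
    out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr; echo "exit: $?"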

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 node start m03 -v=7 --alsologtostderr: (40.470658586s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.06s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 node delete m03: (1.64968622s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)
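The readiness check in the final step above uses a kubectl go-template; as a standalone sketch (assuming the kubeconfig already points at this cluster), the same check looks like:

    # print the Ready condition status (True/False) for every node, one per line
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'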

                                                
                                    
TestMultiNode/serial/RestartMultiNode (180.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 18:03:38.398903   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:05:33.435800   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.939753296s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.44s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-949956
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-949956-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.321816ms)

                                                
                                                
-- stdout --
	* [multinode-949956-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-949956-m02' is duplicated with machine name 'multinode-949956-m02' in profile 'multinode-949956'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956-m03 --driver=kvm2  --container-runtime=crio: (41.368347261s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-949956
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-949956: exit status 80 (205.864349ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-949956 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-949956-m03 already exists in multinode-949956-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-949956-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.65s)

                                                
                                    
TestScheduledStopUnix (110.52s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-525736 --memory=2048 --driver=kvm2  --container-runtime=crio
E1028 18:10:33.436835   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-525736 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.007848813s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-525736 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-525736 -n scheduled-stop-525736
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-525736 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1028 18:11:03.821940   20680 retry.go:31] will retry after 131.732µs: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.823080   20680 retry.go:31] will retry after 175.399µs: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.824198   20680 retry.go:31] will retry after 150.405µs: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.825318   20680 retry.go:31] will retry after 292.183µs: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.826427   20680 retry.go:31] will retry after 722.347µs: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.827536   20680 retry.go:31] will retry after 589.618µs: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.828672   20680 retry.go:31] will retry after 1.646303ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.830876   20680 retry.go:31] will retry after 1.626469ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.833085   20680 retry.go:31] will retry after 3.039299ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.836214   20680 retry.go:31] will retry after 5.389439ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.842444   20680 retry.go:31] will retry after 3.879504ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.846684   20680 retry.go:31] will retry after 7.419494ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.854891   20680 retry.go:31] will retry after 15.052914ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.870054   20680 retry.go:31] will retry after 23.605079ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
I1028 18:11:03.894292   20680 retry.go:31] will retry after 18.60514ms: open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/scheduled-stop-525736/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-525736 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-525736 -n scheduled-stop-525736
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-525736
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-525736 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-525736
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-525736: exit status 7 (62.437936ms)

                                                
                                                
-- stdout --
	scheduled-stop-525736
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-525736 -n scheduled-stop-525736
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-525736 -n scheduled-stop-525736: exit status 7 (61.922259ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-525736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-525736
--- PASS: TestScheduledStopUnix (110.52s)
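The scheduled-stop behaviour exercised above reduces to a few CLI calls; a minimal sketch with the profile name from this run (the 5m and 15s delays are arbitrary values used by the test, not requirements):

    # schedule a stop five minutes out, then inspect the pending timer
    out/minikube-linux-amd64 stop -p scheduled-stop-525736 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-525736
    # cancel the pending stop, or replace it with a shorter delay
    out/minikube-linux-amd64 stop -p scheduled-stop-525736 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-525736 --schedule 15s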

                                                
                                    
TestRunningBinaryUpgrade (183.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2322071829 start -p running-upgrade-703793 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2322071829 start -p running-upgrade-703793 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m12.387649113s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-703793 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-703793 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m47.141043757s)
helpers_test.go:175: Cleaning up "running-upgrade-703793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-703793
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-703793: (1.176508961s)
--- PASS: TestRunningBinaryUpgrade (183.93s)
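The scenario here is a live binary upgrade: a cluster is created with an older minikube release and then, without stopping it, the same profile is started again with the binary under test. A minimal sketch of that flow (the /tmp path to the old binary is specific to this run):

    # 1. create the cluster with the old v1.26.0 release
    /tmp/minikube-v1.26.0.2322071829 start -p running-upgrade-703793 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # 2. point the new binary at the still-running profile; start reconciles the existing cluster instead of creating a new one
    out/minikube-linux-amd64 start -p running-upgrade-703793 --memory=2200 --driver=kvm2 --container-runtime=crio
    # 3. clean up
    out/minikube-linux-amd64 delete -p running-upgrade-703793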

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.74s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (174.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2235976009 start -p stopped-upgrade-165190 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
I1028 18:12:19.752709   20680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 18:12:24.021858   20680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1028 18:12:24.048847   20680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1028 18:12:24.048878   20680 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1028 18:12:24.048947   20680 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 18:12:24.048978   20680 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4277925268/002/docker-machine-driver-kvm2
I1028 18:12:24.384542   20680 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4277925268/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000792400 gz:0xc000792408 tar:0xc0007923a0 tar.bz2:0xc0007923c0 tar.gz:0xc0007923d0 tar.xz:0xc0007923e0 tar.zst:0xc0007923f0 tbz2:0xc0007923c0 tgz:0xc0007923d0 txz:0xc0007923e0 tzst:0xc0007923f0 xz:0xc000792410 zip:0xc000792420 zst:0xc000792418] Getters:map[file:0xc000c025f0 http:0xc00073ec80 https:0xc00073ecd0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 18:12:24.384594   20680 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4277925268/002/docker-machine-driver-kvm2
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2235976009 start -p stopped-upgrade-165190 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m7.372457968s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2235976009 -p stopped-upgrade-165190 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2235976009 -p stopped-upgrade-165190 stop: (2.123433464s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-165190 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-165190 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.847746375s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (174.34s)
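The stopped-upgrade variant is the same idea as TestRunningBinaryUpgrade above, with an explicit stop in between: create the cluster with the old release, stop it with the old release, then start the stopped profile with the binary under test. A minimal sketch (again, the /tmp path to the old binary is specific to this run):

    # create and then stop the cluster with the old v1.26.0 release
    /tmp/minikube-v1.26.0.2235976009 start -p stopped-upgrade-165190 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.2235976009 -p stopped-upgrade-165190 stop
    # restart the stopped profile with the new binary
    out/minikube-linux-amd64 start -p stopped-upgrade-165190 --memory=2200 --driver=kvm2 --container-runtime=crio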

                                                
                                    
TestPause/serial/Start (94.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-006166 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1028 18:13:38.395510   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-006166 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m34.873496092s)
--- PASS: TestPause/serial/Start (94.87s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (62.18234ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-793119] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
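This check documents a CLI constraint rather than a cluster behaviour: --no-kubernetes and --kubernetes-version are mutually exclusive, and minikube exits with status 14 (MK_USAGE) when both are given. A minimal sketch of the rejected call and the workaround the error message itself suggests:

    # rejected with exit status 14: the two flags cannot be combined
    out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # if a global kubernetes-version is configured, unset it and retry without the flag
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --driver=kvm2 --container-runtime=crio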

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (52.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793119 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793119 --driver=kvm2  --container-runtime=crio: (52.738155884s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-793119 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (52.99s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --driver=kvm2  --container-runtime=crio: (16.40047084s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-793119 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-793119 status -o json: exit status 2 (240.015445ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-793119","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-793119
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-793119: (1.030159066s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.67s)
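The JSON form of status shown above is convenient for scripting; a small sketch, assuming jq is installed (jq is not part of this test):

    # Host stays Running while Kubelet/APIServer report Stopped after a --no-kubernetes restart;
    # note that minikube status itself exits with status 2 in this state, as seen above
    out/minikube-linux-amd64 -p NoKubernetes-793119 status -o json | jq -r '.Host, .Kubelet, .APIServer'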

                                                
                                    
TestNoKubernetes/serial/Start (28.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793119 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.074147528s)
--- PASS: TestNoKubernetes/serial/Start (28.07s)

                                                
                                    
TestNetworkPlugins/group/false (3.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-457876 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-457876 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (99.277912ms)

                                                
                                                
-- stdout --
	* [false-457876] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 18:15:06.339012   58673 out.go:345] Setting OutFile to fd 1 ...
	I1028 18:15:06.339133   58673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:15:06.339144   58673 out.go:358] Setting ErrFile to fd 2...
	I1028 18:15:06.339150   58673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 18:15:06.339411   58673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-13443/.minikube/bin
	I1028 18:15:06.340129   58673 out.go:352] Setting JSON to false
	I1028 18:15:06.341425   58673 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7049,"bootTime":1730132257,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 18:15:06.341503   58673 start.go:139] virtualization: kvm guest
	I1028 18:15:06.343619   58673 out.go:177] * [false-457876] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 18:15:06.344839   58673 notify.go:220] Checking for updates...
	I1028 18:15:06.344877   58673 out.go:177]   - MINIKUBE_LOCATION=19872
	I1028 18:15:06.346091   58673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 18:15:06.347438   58673 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19872-13443/kubeconfig
	I1028 18:15:06.348716   58673 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-13443/.minikube
	I1028 18:15:06.349965   58673 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 18:15:06.351242   58673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 18:15:06.352965   58673 config.go:182] Loaded profile config "NoKubernetes-793119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1028 18:15:06.353072   58673 config.go:182] Loaded profile config "kubernetes-upgrade-192352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 18:15:06.353167   58673 config.go:182] Loaded profile config "stopped-upgrade-165190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 18:15:06.353257   58673 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 18:15:06.388162   58673 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 18:15:06.389274   58673 start.go:297] selected driver: kvm2
	I1028 18:15:06.389286   58673 start.go:901] validating driver "kvm2" against <nil>
	I1028 18:15:06.389296   58673 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 18:15:06.391046   58673 out.go:201] 
	W1028 18:15:06.392115   58673 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1028 18:15:06.393221   58673 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-457876 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-457876" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 18:15:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.163:8443
  name: stopped-upgrade-165190
contexts:
- context:
    cluster: stopped-upgrade-165190
    user: stopped-upgrade-165190
  name: stopped-upgrade-165190
current-context: stopped-upgrade-165190
kind: Config
preferences: {}
users:
- name: stopped-upgrade-165190
  user:
    client-certificate: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/client.crt
    client-key: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-457876

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457876"

                                                
                                                
----------------------- debugLogs end: false-457876 [took: 2.871504392s] --------------------------------
helpers_test.go:175: Cleaning up "false-457876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-457876
--- PASS: TestNetworkPlugins/group/false (3.11s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-165190
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-793119 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-793119 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.804185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.90s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-793119
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-793119: (1.278091613s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (69.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793119 --driver=kvm2  --container-runtime=crio
E1028 18:15:33.435639   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793119 --driver=kvm2  --container-runtime=crio: (1m9.153594413s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (69.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-793119 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-793119 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.833561ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (107.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-051152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-051152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m47.060800907s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (74.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-021370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 18:20:33.436026   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-021370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m14.393501305s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (13.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-021370 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4489e423-7a83-4fe5-b1b9-03ab14427a87] Pending
helpers_test.go:344: "busybox" [4489e423-7a83-4fe5-b1b9-03ab14427a87] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4489e423-7a83-4fe5-b1b9-03ab14427a87] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.004943011s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-021370 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-021370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-021370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068968571s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-021370 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-051152 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd72dfa9-7469-4eae-89c6-a89387a8d443] Pending
helpers_test.go:344: "busybox" [bd72dfa9-7469-4eae-89c6-a89387a8d443] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd72dfa9-7469-4eae-89c6-a89387a8d443] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.00500887s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-051152 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-051152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-051152 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-692033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-692033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m29.340888805s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [259c71f2-3d85-4e19-96f7-9467983bd8ab] Pending
helpers_test.go:344: "busybox" [259c71f2-3d85-4e19-96f7-9467983bd8ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [259c71f2-3d85-4e19-96f7-9467983bd8ab] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004765023s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-692033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-692033 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (676s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-021370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-021370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m15.757770399s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021370 -n embed-certs-021370
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (676.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (599.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-051152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-051152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m59.156435604s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-051152 -n no-preload-051152
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (599.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-223868 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-223868 --alsologtostderr -v=3: (3.282244685s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223868 -n old-k8s-version-223868: exit status 7 (62.687879ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-223868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (520.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-692033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 18:26:56.512973   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:28:38.394789   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:30:33.435745   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:33:38.394773   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-692033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (8m40.638304392s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-692033 -n default-k8s-diff-port-692033
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (520.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-724173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 18:48:38.395218   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-724173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (45.274626656s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-724173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-724173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12316745s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-724173 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-724173 --alsologtostderr -v=3: (10.320531884s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-724173 -n newest-cni-724173
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-724173 -n newest-cni-724173: exit status 7 (66.131246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-724173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-724173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-724173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (37.651676377s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-724173 -n newest-cni-724173
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.91s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (98.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m38.116196896s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (95.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m35.14623279s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (95.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-724173 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-724173 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-724173 -n newest-cni-724173
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-724173 -n newest-cni-724173: exit status 2 (233.854051ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-724173 -n newest-cni-724173
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-724173 -n newest-cni-724173: exit status 2 (233.638639ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-724173 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-724173 -n newest-cni-724173
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-724173 -n newest-cni-724173
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (110.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1028 18:50:33.436347   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/addons-186035/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m50.567179744s)
--- PASS: TestNetworkPlugins/group/flannel/Start (110.57s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-457876 "pgrep -a kubelet"
I1028 18:50:57.675960   20680 config.go:182] Loaded profile config "auto-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-457876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6ndgb" [9127effa-337e-4554-832f-208b2da605b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6ndgb" [9127effa-337e-4554-832f-208b2da605b0] Running
E1028 18:51:03.321491   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:03.327866   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:03.339211   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:03.360588   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:03.402356   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:03.483916   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:03.645410   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:03.966999   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:04.609085   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:05.891366   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004935929s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-c4wgd" [dbd03dfa-95eb-45b9-8bb2-5b7ca5df9285] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004028522s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-457876 "pgrep -a kubelet"
I1028 18:51:06.592602   20680 config.go:182] Loaded profile config "kindnet-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-457876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xrsms" [e9f7a34f-1027-47f6-91e9-b25fca82b57e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 18:51:08.453395   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-xrsms" [e9f7a34f-1027-47f6-91e9-b25fca82b57e] Running
E1028 18:51:13.575202   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004707336s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-457876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-457876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m21.443851868s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (83.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m23.170587754s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (119.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m59.19834809s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (119.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-52s99" [aac37ce5-981a-44f9-a2e2-523eb92a3951] Running
E1028 18:51:41.470031   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:44.299044   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005494614s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-457876 "pgrep -a kubelet"
I1028 18:51:46.128600   20680 config.go:182] Loaded profile config "flannel-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-457876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8mr4s" [d37710a6-4918-4522-8cdb-78bc80841202] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 18:51:50.577173   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:50.583571   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:50.594912   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:50.616246   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:50.657576   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:50.739738   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:50.902007   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:51.223517   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-8mr4s" [d37710a6-4918-4522-8cdb-78bc80841202] Running
E1028 18:51:51.865690   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:53.147977   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:51:55.710218   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005233961s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)
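For reference, the readiness gate that net_test.go applies to the netcat pods can be approximated by hand. A minimal sketch, assuming the flannel-457876 profile and its kubeconfig context still exist; the manifest path, label selector, and timeout are taken from the log lines above:

# Re-apply the same netcat deployment manifest the test uses.
kubectl --context flannel-457876 replace --force -f testdata/netcat-deployment.yaml

# Block until a pod matching app=netcat reports Ready, mirroring the
# 15m wait performed at net_test.go:163.
kubectl --context flannel-457876 wait --for=condition=ready \
  pod -l app=netcat --timeout=15m

# Show the pod phase that helpers_test.go:344 records in the log.
kubectl --context flannel-457876 get pods -l app=netcat -o wide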

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-457876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)
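If a DNS check like this one fails, the debugLogs helper at the end of this report falls back to probing the cluster DNS service directly. A rough manual equivalent, assuming dig is available in the netcat image and that the cluster DNS service sits at 10.96.0.10 (the address used by those debug probes):

# Same lookup the test runs, executed in the netcat pod.
kubectl --context flannel-457876 exec deployment/netcat -- nslookup kubernetes.default

# Query the cluster DNS service directly over UDP, then TCP.
kubectl --context flannel-457876 exec deployment/netcat -- \
  dig @10.96.0.10 kubernetes.default.svc.cluster.local
kubectl --context flannel-457876 exec deployment/netcat -- \
  dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local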

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.42s)
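The Localhost and HairPin checks above wrap the same nc probe and differ only in the target host; annotated here with the conventional netcat flag meanings (-z scan without sending data, -w 5 five-second timeout, -i 5 five-second interval between probes):

# Localhost: the pod must reach its own container port directly.
kubectl --context flannel-457876 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# HairPin: the pod must reach itself back through its own Service name,
# which only works when hairpin NAT is functioning on the node.
kubectl --context flannel-457876 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"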

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (114.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1028 18:52:25.260925   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/no-preload-051152/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:31.556442   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/old-k8s-version-223868/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-457876 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m54.366008309s)
--- PASS: TestNetworkPlugins/group/calico/Start (114.37s)
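The start invocation above is reproduced below with per-flag notes; all flags are standard minikube options, shown only as a readability aid for this run's configuration:

# -p calico-457876            dedicated profile for this network-plugin group
# --memory=3072               memory allocated to the VM (MB)
# --cni=calico                deploy Calico as the CNI plugin
# --driver=kvm2               KVM virtual machines, this job's driver
# --container-runtime=crio    CRI-O runtime, matching the job name
# --wait=true / --wait-timeout=15m   block until core components are healthy, up to 15 minutes
# --alsologtostderr           mirror logs to stderr for the CI capture
out/minikube-linux-amd64 start -p calico-457876 --memory=3072 --alsologtostderr \
  --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 --container-runtime=crio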

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-457876 "pgrep -a kubelet"
I1028 18:52:46.510336   20680 config.go:182] Loaded profile config "enable-default-cni-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)
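KubeletFlags only confirms that the kubelet is running inside the guest with the expected command line, via the pgrep shown above. A manual version of the same probe, assuming the kubelet runs as a systemd unit in the minikube VM image (as it does for the kvm2 driver):

# Print the kubelet command line; the test inspects this output for the
# flags minikube is expected to have set.
out/minikube-linux-amd64 ssh -p enable-default-cni-457876 "pgrep -a kubelet"

# The same flags can be read from the systemd unit inside the guest.
out/minikube-linux-amd64 ssh -p enable-default-cni-457876 \
  "systemctl cat kubelet | grep -A5 ExecStart"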

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-457876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qgpvz" [fd68c489-ed21-46e4-aa3f-dd1d41ccc83e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qgpvz" [fd68c489-ed21-46e4-aa3f-dd1d41ccc83e] Running
E1028 18:52:55.785798   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004503688s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-457876 "pgrep -a kubelet"
I1028 18:52:49.328238   20680 config.go:182] Loaded profile config "bridge-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-457876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-457876 replace --force -f testdata/netcat-deployment.yaml: (1.151726246s)
I1028 18:52:50.496901   20680 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1028 18:52:50.509666   20680 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-crhgz" [bc686b9a-8ad1-4d13-9dea-019fda426bf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 18:52:50.655061   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:50.661418   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:50.672757   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:50.694168   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:50.735644   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:50.817345   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:50.978849   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:51.300558   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:51.942410   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
E1028 18:52:53.223816   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-crhgz" [bc686b9a-8ad1-4d13-9dea-019fda426bf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004850818s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.22s)
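The kapi.go:136 lines above show the harness waiting for the netcat Deployment's observedGeneration and replica counts to catch up with the spec before it starts polling pods. A minimal manual equivalent, assuming the bridge-457876 context is still available:

# Wait for the Deployment controller to observe the latest generation
# and report the rollout as complete.
kubectl --context bridge-457876 rollout status deployment/netcat --timeout=15m

# Inspect the fields kapi.go compares while it waits.
kubectl --context bridge-457876 get deployment netcat \
  -o jsonpath='{.metadata.generation} {.status.observedGeneration} {.status.replicas}{"\n"}'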

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-457876 exec deployment/netcat -- nslookup kubernetes.default
E1028 18:53:00.907430   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-457876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-457876 "pgrep -a kubelet"
I1028 18:53:32.937348   20680 config.go:182] Loaded profile config "custom-flannel-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-457876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gsnm7" [7693ec69-fba0-4287-90f5-f8dfc606749c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gsnm7" [7693ec69-fba0-4287-90f5-f8dfc606749c] Running
E1028 18:53:38.394995   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/functional-972498/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00375465s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-457876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ljmd4" [dbeec1d6-a475-4df5-97f5-4df8dd0406d4] Running
E1028 18:54:12.592578   20680 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/default-k8s-diff-port-692033/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004652055s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
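The ControllerPod gate only verifies that a calico-node pod is Running and healthy under the k8s-app=calico-node label. A minimal sketch of the same check by hand, with the label, namespace, and timeout taken from the log above:

# List the Calico node agents the test waits on.
kubectl --context calico-457876 -n kube-system get pods -l k8s-app=calico-node -o wide

# Block until they report Ready, mirroring the 10m wait at net_test.go:120.
kubectl --context calico-457876 -n kube-system wait \
  --for=condition=ready pod -l k8s-app=calico-node --timeout=10m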

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-457876 "pgrep -a kubelet"
I1028 18:54:13.287861   20680 config.go:182] Loaded profile config "calico-457876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-457876 replace --force -f testdata/netcat-deployment.yaml
I1028 18:54:13.501691   20680 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9z5fc" [5b410814-4ecb-494e-bdbb-38b6c25063b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9z5fc" [5b410814-4ecb-494e-bdbb-38b6c25063b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00373434s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-457876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-457876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    

Test skip (39/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.13
271 TestNetworkPlugins/group/kubenet 3.24
279 TestNetworkPlugins/group/cilium 3.5
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-186035 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-976691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-976691
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-457876 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-457876" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 18:15:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.163:8443
  name: stopped-upgrade-165190
contexts:
- context:
    cluster: stopped-upgrade-165190
    user: stopped-upgrade-165190
  name: stopped-upgrade-165190
current-context: stopped-upgrade-165190
kind: Config
preferences: {}
users:
- name: stopped-upgrade-165190
  user:
    client-certificate: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/client.crt
    client-key: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-457876

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457876"

                                                
                                                
----------------------- debugLogs end: kubenet-457876 [took: 3.098625532s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-457876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-457876
--- SKIP: TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-457876 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-457876" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19872-13443/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 18:15:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.163:8443
  name: stopped-upgrade-165190
contexts:
- context:
    cluster: stopped-upgrade-165190
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 18:15:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: stopped-upgrade-165190
  name: stopped-upgrade-165190
current-context: stopped-upgrade-165190
kind: Config
preferences: {}
users:
- name: stopped-upgrade-165190
  user:
    client-certificate: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/client.crt
    client-key: /home/jenkins/minikube-integration/19872-13443/.minikube/profiles/stopped-upgrade-165190/client.key
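Editor's note: the dump above is a standard kubeconfig; because the cilium-457876 profile was never created, it still points at the leftover stopped-upgrade-165190 context. A minimal sketch of how that could be checked from a shell (assuming kubectl is on the PATH and this file is the active kubeconfig; these commands are not part of the original test output):

# hypothetical verification, not from the test run
kubectl config current-context            # expected to print: stopped-upgrade-165190
kubectl config get-contexts               # cilium-457876 would not be listed
kubectl config view --minify --flatten    # shows only the active cluster/user entries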

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-457876

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-457876" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457876"

                                                
                                                
----------------------- debugLogs end: cilium-457876 [took: 3.341256383s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-457876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-457876
--- SKIP: TestNetworkPlugins/group/cilium (3.50s)

                                                
                                    